diff --git a/Makefile b/Makefile
index 2a50437c..4842e86e 100644
--- a/Makefile
+++ b/Makefile
@@ -6,10 +6,7 @@ getdeps:
.PHONY: docs
docs:
- @echo "Fetching external docs…"
- @find ./content/docs -maxdepth 1 -type l -delete
@python3 ./tools/fcl-fetch-version-data.py ./content/docs/_index.md.in > ./content/docs/_index.md
- python3 ./tools/docs-fetcher.py ./config.yaml
run:
hugo server --theme=flatcar --buildFuture --watch --disableFastRender --config ./config.yaml\,./tmp_modules.yaml
diff --git a/content/docs/.gitignore b/content/docs/.gitignore
index 9a5812bb..f20385a1 100644
--- a/content/docs/.gitignore
+++ b/content/docs/.gitignore
@@ -1,2 +1 @@
-*
-!_index.md.in
+_index.md
diff --git a/content/docs/latest/_index.md b/content/docs/latest/_index.md
new file mode 100644
index 00000000..185caf92
--- /dev/null
+++ b/content/docs/latest/_index.md
@@ -0,0 +1,249 @@
+---
+content_type: flatcar
+title: Flatcar Container Linux
+main_menu: true
+weight: 40
+---
+
+Flatcar Container Linux is a container-optimized OS that ships a minimal OS
+image, which includes only the tools needed to run containers. The OS is
+shipped through an immutable filesystem and includes automatic atomic
+updates.
+
+
+### Getting Started
+
+If you're new to Flatcar and looking for a brief introduction to getting Flatcar up and running, have a look at our [quickstart guide][quick-start].
+
+### Installing Flatcar
+
+Flatcar Container Linux runs on most cloud providers, virtualization
+platforms and bare metal servers.
+
+#### Cloud Providers
+ * [Amazon EC2][ec2]
+ * [Microsoft Azure][azure]
+ * [Google Compute Engine][gce]
+ * [Equinix Metal][equinix-metal]
+ * [VMware][vmware]
+ * [DigitalOcean][digital-ocean]
+ * [Hetzner][hetzner]
+ * [OpenStack][openstack]
+
+#### Virtualization options
+It's easy to run a local Flatcar VM on your laptop for testing and debugging
+purposes. You can use any of the following options.
+
+ * [QEMU][qemu]
+ * [libVirt][libvirt]
+ * [VirtualBox][virtualbox] (not officially supported)
+ * [Vagrant][vagrant] (not officially supported)
+
+#### Bare Metal
+You can install Flatcar on bare metal machines in different ways: using ISO
+images, booting from PXE or iPXE, and even by running an installation
+script on an existing Linux system.
+
+ * [Installing from ISO images][boot-iso]
+ * [Booting with PXE][pxe]
+ * [Booting with iPXE][ipxe]
+ * [Installing with flatcar-install][install-to-disk]
+
+If you want to provide metadata to your bare metal machines, we recommend
+using [Matchbox][matchbox].
+
+#### Upgrading from CoreOS Container Linux
+
+Flatcar Container Linux is a drop-in replacement for CoreOS Container Linux.
+If you are a CoreOS Container Linux user looking for a replacement,
+check out our guides to [migrate from CoreOS Container
+Linux][migrate-from-container-linux], or [update from CoreOS
+Container Linux][update-from-container-linux] directly.
+
+### Provisioning Tools
+
+[Ignition][ignition-what] is the recommended way to provision Flatcar
+Container Linux at first boot. Ignition uses a JSON configuration file,
+and it is recommended to generate it from the [Container Linux
+Config][container-linux-config] YAML format, which has additional features.
+The [Container Linux Config Transpiler][config-transpiler] converts a
+Container Linux Config to an Ignition config.
+
+ * [Understanding the Boot Process][ignition-boot]
+ * [Configuring the Network with Ignition][ignition-network]
+ * [Using metadata during provisioning][ignition-metadata]
+ * [Getting started with Butane][config-intro]
+ * [Examples of using Butane][config-examples]
+ * [Using Terraform to provision Flatcar Container Linux][terraform]
+ * [Extending the base OS with systemd-sysext images][sysext]
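+
+As a quick sketch (assuming the `butane` CLI is installed locally and a `config.yaml` Butane file exists), transpiling to the Ignition JSON format looks like this:
+
+```shell
+# Transpile a Butane YAML config into an Ignition JSON config.
+# --strict turns warnings into errors; --pretty makes the JSON readable.
+butane --pretty --strict config.yaml > config.ign
+```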
+
+### Setting Flatcar Up and Common Operations
+
+Follow these guides to connect your machines together as a cluster,
+configure machine parameters, create users, inject multiple SSH keys, and
+more.
+
+#### Customizing Flatcar
+ * [Using networkd to customize networking][networkd-customize]
+ * [Using systemd drop-in units][systemd-drop-in]
+ * [Using environment variables in systemd units][environment-variables-systemd]
+ * [Using systemd and udev rules][udev-rules]
+ * [Using NVIDIA GPUs on Flatcar][using-nvidia]
+ * [Scheduling tasks with systemd timers][tasks-with-systemd]
+ * [Configuring DNS][dns]
+ * [Configuring date & timezone][date-timezone]
+ * [Adding users][users]
+ * [Kernel modules / sysctl parameters][parameters]
+ * [Adding swap][swap]
+ * [Power management][power-management]
+ * [ACPI][acpi]
+
+#### Managing Releases and Updates
+ * [Switching release channels][release-channels]
+ * [Configuring the update strategy][update-strategies]
+ * [Flatcar update configuration specification][update-conf]
+ * [Verifying Flatcar Images with GPG][verify-container-linux]
+
+#### Creating Clusters
+ * [Cluster architectures][cluster-architectures]
+ * [Clustering machines][clustering-machines]
+ * [Using Amazon EC2 Container Service][ec2-container-service]
+
+#### Managing Storage
+ * [Using RAID for the root filesystem][filesystem-placement]
+ * [Adding disk space][disk-space]
+ * [Mounting storage][mounting-storage]
+ * [iSCSI configuration][iscsi]
+
+#### Additional security options
+ * [Customizing the SSH daemon][ssh-daemon]
+ * [Configuring SSSD on Flatcar Container Linux][sssd-container-linux]
+ * [Hardening a Flatcar Container Linux machine][hardening-container-linux]
+ * [Trusted Computing Hardware Requirements][hardware-requirements]
+ * [Adding Cert Authorities][cert-authorities]
+ * [Using SELinux][selinux]
+ * [Disabling SMT][disabling-smt]
+ * [Enabling FIPS][enabling-fips]
+ * [Using the audit subsystem][audit-system]
+
+#### Debugging Flatcar
+ * [Install debugging tools][debugging-tools]
+ * [Working with btrfs][btrfs]
+ * [Reading the system log][system-log]
+ * [Collecting crash logs][crash-log]
+ * [Manual Flatcar Container Linux rollbacks][container-linux-rollbacks]
+
+### Container Runtimes
+Flatcar Container Linux supports all of the popular methods for running
+containers, and you can choose to interact with the containers at a
+low-level, or use a higher level orchestration framework. Listed below are
+some guides to help you choose and make use of the different runtimes.
+
+ * [Getting started with Docker][docker]
+ * [Customizing Docker][customizing-docker]
+ * [Using systemd to manage Docker containers][manage-docker-containers]
+ * [Use a custom Docker or containerd version][use-a-custom-docker-or-containerd-version]
+ * [Authenticating to Container registries][registry-authentication]
+ * [Getting started with Kubernetes][kubernetes]
+
+### Developer guides and Reference
+APIs and troubleshooting guides for working with Flatcar Container Linux.
+
+* [Developer guides][developer-guides]: Comprehensive guides on developing for Flatcar, working with the SDK, and on building and extending OS images.
+* [Integrations][integrations]
+* [Migrating from cloud-config to Container Linux Config][migrating-from-cloud-config]
+* [Flatcar Supply Chain Security (SLSA and SPDX SBOM)][supply-chain-security], detailing security mechanisms employed at build/release time and at run time to ensure the validity of inputs processed and outputs shipped.
+
+### Tutorial
+A tutorial series that dives into some fundamental Flatcar concepts.
+* [Introduction][tutorial-introduction]
+* [Hands-on 1: Discovering][tutorial-hands-on-1]
+* [Hands-on 2: Provisioning][tutorial-hands-on-2]
+* [Hands-on 3: Deploying][tutorial-hands-on-3]
+* [Hands-on 4: Updating][tutorial-hands-on-4]
+
+[quick-start]: installing
+[supply-chain-security]: reference/supply-chain
+[ignition-what]: provisioning/ignition/
+[ignition-boot]: provisioning/ignition/boot-process
+[ignition-network]: provisioning/ignition/network-configuration
+[ignition-metadata]: provisioning/ignition/metadata
+[container-linux-config]: provisioning/cl-config/
+[config-transpiler]: provisioning/config-transpiler/
+[config-intro]: provisioning/config-transpiler/getting-started
+[config-dynamic-data]: provisioning/config-transpiler/dynamic-data
+[config-examples]: provisioning/config-transpiler/examples
+[matchbox]: https://matchbox.psdn.io/
+[ipxe]: installing/bare-metal/booting-with-ipxe
+[pxe]: installing/bare-metal/booting-with-pxe
+[install-to-disk]: installing/bare-metal/installing-to-disk
+[boot-iso]: installing/bare-metal/booting-with-iso
+[filesystem-placement]: setup/storage/raid
+[migrate-from-container-linux]: migrating-from-coreos/
+[update-from-container-linux]: migrating-from-coreos/update-from-container-linux
+[ec2]: installing/cloud/aws-ec2
+[digital-ocean]: installing/cloud/digitalocean
+[gce]: installing/cloud/gcp
+[azure]: installing/cloud/azure
+[qemu]: installing/vms/qemu
+[equinix-metal]: installing/cloud/equinix-metal
+[libvirt]: installing/vms/libvirt
+[virtualbox]: installing/vms/virtualbox
+[vagrant]: installing/vms/vagrant
+[vmware]: installing/cloud/vmware
+[cluster-architectures]: setup/clusters/architectures
+[update-strategies]: setup/releases/update-strategies
+[clustering-machines]: setup/clusters/discovery
+[verify-container-linux]: setup/releases/verify-images
+[networkd-customize]: setup/customization/network-config-with-networkd
+[systemd-drop-in]: setup/systemd/drop-in-units
+[environment-variables-systemd]: setup/systemd/environment-variables
+[dns]: setup/customization/configuring-dns
+[date-timezone]: setup/customization/configuring-date-and-timezone
+[users]: setup/customization/adding-users
+[parameters]: setup/customization/other-settings
+[disk-space]: setup/storage/adding-disk-space
+[mounting-storage]: setup/storage/mounting-storage
+[power-management]: setup/customization/power-management
+[registry-authentication]: container-runtimes/registry-authentication
+[iscsi]: setup/storage/iscsi
+[swap]: setup/storage/adding-swap
+[ec2-container-service]: setup/clusters/booting-on-ecs/
+[manage-docker-containers]: setup/systemd/getting-started
+[udev-rules]: setup/systemd/udev-rules
+[update-conf]: setup/releases/update-conf
+[release-channels]: setup/releases/switching-channels
+[tasks-with-systemd]: setup/systemd/timers
+[ssh-daemon]: setup/security/customizing-sshd
+[sssd-container-linux]: setup/security/sssd
+[hardening-container-linux]: setup/security/hardening-guide
+[hardware-requirements]: setup/security/trusted-computing-hardware-requirements
+[cert-authorities]: setup/security/adding-certificate-authorities
+[selinux]: setup/security/selinux
+[disabling-smt]: setup/security/disabling-smt
+[enabling-fips]: setup/security/fips
+[audit-system]: setup/security/audit
+[debugging-tools]: setup/debug/install-debugging-tools
+[btrfs]: setup/debug/btrfs-troubleshooting
+[system-log]: setup/debug/reading-the-system-log
+[crash-log]: setup/debug/collecting-crash-logs
+[container-linux-rollbacks]: setup/debug/manual-rollbacks
+[docker]: container-runtimes/getting-started-with-docker
+[customizing-docker]: container-runtimes/customizing-docker
+[use-a-custom-docker-or-containerd-version]: container-runtimes/use-a-custom-docker-or-containerd-version
+[developer-guides]: reference/developer-guides/
+[integrations]: reference/integrations/
+[migrating-from-cloud-config]: provisioning/cl-config/from-cloud-config
+[containerd-for-kubernetes]: container-runtimes/switching-from-docker-to-containerd-for-kubernetes
+[terraform]: provisioning/terraform/
+[hetzner]: installing/cloud/hetzner
+[sysext]: provisioning/sysext/
+[acpi]: setup/customization/ACPI
+[openstack]: installing/cloud/openstack
+[kubernetes]: container-runtimes/getting-started-with-kubernetes
+[using-nvidia]: setup/customization/using-nvidia
+[tutorial-introduction]: tutorial/
+[tutorial-hands-on-1]: tutorial/hands-on-1
+[tutorial-hands-on-2]: tutorial/hands-on-2
+[tutorial-hands-on-3]: tutorial/hands-on-3
+[tutorial-hands-on-4]: tutorial/hands-on-4
diff --git a/content/docs/latest/container-runtimes/_index.md b/content/docs/latest/container-runtimes/_index.md
new file mode 100644
index 00000000..09837b7e
--- /dev/null
+++ b/content/docs/latest/container-runtimes/_index.md
@@ -0,0 +1,10 @@
+---
+title: Container Runtimes
+description: >
+ Flatcar Container Linux supports all of the popular methods for running
+ containers, and you can choose to interact with the containers at a
+ low-level, or use a higher level orchestration framework. These guides
+ can help you choose and use the different container runtimes supported.
+weight: 60
+---
+
diff --git a/content/docs/latest/container-runtimes/customizing-docker.md b/content/docs/latest/container-runtimes/customizing-docker.md
new file mode 100644
index 00000000..ea418528
--- /dev/null
+++ b/content/docs/latest/container-runtimes/customizing-docker.md
@@ -0,0 +1,383 @@
+---
+title: Customizing Docker
+description: >
+ How to select which runtime to use, make docker available on a
+ TCP socket, enable TLS, and other customizations.
+weight: 30
+aliases:
+ - ../os/customizing-docker
+---
+
+The Docker systemd unit can be customized by overriding the unit that ships with the default Flatcar Container Linux settings or through a drop-in unit. Common use-cases for doing this are covered below.
+
+For switching to using containerd with Kubernetes, there is an [extra guide](../switching-from-docker-to-containerd-for-kubernetes/).
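+
+To see which units already have overrides or drop-ins on a running machine, systemd's own tooling helps:
+
+```shell
+# Show units whose vendor configuration is extended or overridden
+systemd-delta --type=extended,overridden
+# Print the full docker unit, including any drop-in fragments
+systemctl cat docker.service
+```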
+
+## Use a custom containerd configuration
+
+The default configuration under `/run/torcx/unpack/docker/usr/share/containerd/config.toml` can't be changed, but you can copy it to `/etc/containerd/config.toml` and modify it.
+**NOTE:** newer Flatcar major releases (above major release 3760) ship the default configuration under `/usr/share/containerd/config.toml` instead.
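+
+A minimal sketch of the copy step (pick the source path matching your release):
+
+```shell
+sudo mkdir -p /etc/containerd
+# On releases above 3760:
+sudo cp /usr/share/containerd/config.toml /etc/containerd/config.toml
+# On older releases, copy from the torcx path instead:
+# sudo cp /run/torcx/unpack/docker/usr/share/containerd/config.toml /etc/containerd/config.toml
+```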
+
+Create a `/etc/systemd/system/containerd.service.d/10-use-custom-config.conf` unit drop-in file to select the new configuration:
+
+```ini
+[Service]
+ExecStart=
+ExecStart=/usr/bin/containerd
+```
+
+On a running system, execute `systemctl daemon-reload ; systemctl restart containerd` for it to take effect.
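+
+To confirm containerd picked up your file, you can dump the configuration it resolved at runtime:
+
+```shell
+# Prints the merged configuration containerd is running with
+sudo containerd config dump | head -n 20
+```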
+
+## Enable the remote API on a new socket
+
+Create a file called `/etc/systemd/system/docker-tcp.socket` to make Docker available on a TCP socket on port 2375.
+
+```ini
+[Unit]
+Description=Docker Socket for the API
+
+[Socket]
+ListenStream=2375
+BindIPv6Only=both
+Service=docker.service
+
+[Install]
+WantedBy=sockets.target
+```
+
+Then enable this new socket:
+
+```shell
+systemctl enable docker-tcp.socket
+systemctl stop docker
+systemctl start docker-tcp.socket
+systemctl start docker
+```
+
+Test that it's working:
+
+```shell
+docker -H tcp://127.0.0.1:2375 ps
+```
+
+### Butane Config
+
+To enable the remote API on every Flatcar Container Linux machine in a cluster, use a [Butane Config][butane-configs]. We only need to provide the new socket file; Docker's socket activation support will automatically start using it:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: docker-tcp.socket
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Docker Socket for the API
+
+ [Socket]
+ ListenStream=2375
+ BindIPv6Only=both
+ Service=docker.service
+
+ [Install]
+ WantedBy=sockets.target
+```
+
+To keep access to the port local, replace the `ListenStream` configuration above with:
+
+```yaml
+ [Socket]
+ ListenStream=127.0.0.1:2375
+```
+
+## Enable the remote API with TLS authentication
+
+Docker TLS configuration consists of three parts: creating keys, configuring a new [systemd socket][systemd-socket] unit, and adding a systemd [drop-in][drop-in] configuration.
+
+### TLS keys creation
+
+Follow the [instructions][self-signed-certs] to create self-signed certificates and private keys. Then copy the following files into the `/etc/docker` directory on your Flatcar Container Linux machine and fix their permissions:
+
+```shell
+scp ~/cfssl/{server.pem,server-key.pem,ca.pem} flatcar.example.com:
+ssh core@flatcar.example.com
+sudo mv {server.pem,server-key.pem,ca.pem} /etc/docker/
+sudo chown root:root /etc/docker/{server-key.pem,server.pem,ca.pem}
+sudo chmod 0600 /etc/docker/server-key.pem
+```
+
+On your local host copy certificates into `~/.docker`:
+
+```shell
+mkdir ~/.docker
+chmod 700 ~/.docker
+cd ~/.docker
+cp -p ~/cfssl/ca.pem ca.pem
+cp -p ~/cfssl/client.pem cert.pem
+cp -p ~/cfssl/client-key.pem key.pem
+```
+
+### Enable the secure remote API on a new socket
+
+Create a file called `/etc/systemd/system/docker-tls-tcp.socket` to make Docker available on a secured TCP socket on port 2376.
+
+```ini
+[Unit]
+Description=Docker Secured Socket for the API
+
+[Socket]
+ListenStream=2376
+BindIPv6Only=both
+Service=docker.service
+
+[Install]
+WantedBy=sockets.target
+```
+
+Then enable this new socket:
+
+```shell
+systemctl enable docker-tls-tcp.socket
+systemctl stop docker
+systemctl start docker-tls-tcp.socket
+```
+
+### Drop-in configuration
+
+Create `/etc/systemd/system/docker.service.d/10-tls-verify.conf` [drop-in][drop-in] for systemd Docker service:
+
+```ini
+[Service]
+Environment="DOCKER_OPTS=--tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server.pem --tlskey=/etc/docker/server-key.pem"
+```
+
+Reload systemd config files and restart docker service:
+
+```shell
+sudo systemctl daemon-reload
+sudo systemctl restart docker.service
+```
+
+Now you can access your Docker API through a TLS-secured connection:
+
+```shell
+docker --tlsverify -H tcp://server:2376 images
+# or
+docker --tlsverify -H tcp://server.example.com:2376 images
+```
+
+If you experience problems connecting to the remote Docker API over TLS, you can debug them with `curl`:
+
+```shell
+curl -v --cacert ~/.docker/ca.pem --cert ~/.docker/cert.pem --key ~/.docker/key.pem https://server:2376
+```
+
+Or on your Flatcar Container Linux host:
+
+```shell
+journalctl -f -u docker.service
+```
+
+In addition, you can export environment variables and use the Docker client without additional options:
+
+```shell
+export DOCKER_HOST=tcp://server.example.com:2376 DOCKER_TLS_VERIFY=1
+docker images
+```
+
+### Butane Config (TLS)
+
+A Butane Config for Docker TLS authentication will look like:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/docker/ca.pem
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN CERTIFICATE-----
+ MIIFNDCCAx6gAwIBAgIBATALBgkqhkiG9w0BAQswLTEMMAoGA1UEBhMDVVNBMRAw
+ DgYDVQQKEwdldGNkLWNhMQswCQYDVQQLEwJDQTAeFw0xNTA5MDIxMDExMDhaFw0y
+ NTA5MDIxMDExMThaMC0xDDAKBgNVBAYTA1VTQTEQMA4GA1UEChMHZXRjZC1jYTEL
+ ... ... ...
+ - path: /etc/docker/server.pem
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN CERTIFICATE-----
+ MIIFajCCA1SgAwIBAgIBBTALBgkqhkiG9w0BAQswLTEMMAoGA1UEBhMDVVNBMRAw
+ DgYDVQQKEwdldGNkLWNhMQswCQYDVQQLEwJDQTAeFw0xNTA5MDIxMDM3MDFaFw0y
+ NTA5MDIxMDM3MDNaMEQxDDAKBgNVBAYTA1VTQTEQMA4GA1UEChMHZXRjZC1jYTEQ
+ ... ... ...
+ - path: /etc/docker/server-key.pem
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN RSA PRIVATE KEY-----
+ MIIJKAIBAAKCAgEA23Q4yELhNEywScrHl6+MUtbonCu59LIjpxDMAGxAHvWhWpEY
+ P5vfas8KgxxNyR+U8VpIjEXvwnhwCx/CSCJc3/VtU9v011Ir0WtTrNDocb90fIr3
+ YeRWq744UJpBeDHPV9opf8xFE7F74zWeTVMwtiMPKcQDzZ7XoNyJMxg1wmiMbdCj
+ ... ... ...
+systemd:
+ units:
+ - name: docker-tls-tcp.socket
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Docker Secured Socket for the API
+
+ [Socket]
+ ListenStream=2376
+ BindIPv6Only=both
+ Service=docker.service
+
+ [Install]
+ WantedBy=sockets.target
+ - name: docker.service
+ dropins:
+ - name: flags.conf
+ contents: |
+ [Service]
+ Environment="DOCKER_OPTS=--tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server.pem --tlskey=/etc/docker/server-key.pem"
+```
+
+## Use attached storage for Docker images
+
+Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers. It's advantageous to use attached storage to expand your capacity for container images. Check out the guide to [mounting storage to your Flatcar Container Linux machine][mounting-storage] for an example of how to bind mount storage into `/var/lib/docker`.
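+
+As a rough sketch (assuming a hypothetical attached disk at `/dev/sdb`; see the storage guide linked above for the systemd mount unit approach), the manual steps look like:
+
+```shell
+# Stop Docker before replacing its data directory
+sudo systemctl stop docker.service docker.socket
+# Format the attached disk (destroys any existing data on /dev/sdb)
+sudo mkfs.ext4 /dev/sdb
+sudo mount /dev/sdb /var/lib/docker
+sudo systemctl start docker.service
+```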
+
+## Enabling the Docker debug flag
+
+Set the `--debug` (`-D`) flag in the `DOCKER_OPTS` environment variable by using a drop-in file. For example, the following could be written to `/etc/systemd/system/docker.service.d/10-debug.conf`:
+
+```ini
+[Service]
+Environment=DOCKER_OPTS=--debug
+```
+
+Now tell systemd about the new configuration and restart Docker:
+
+```shell
+systemctl daemon-reload
+systemctl restart docker
+```
+
+To test the debug stream, run a Docker command and then read the systemd journal, which should contain the debug output:
+
+```shell
+docker ps
+journalctl -u docker
+```
+
+### Butane Config (flags)
+
+If you need to modify a flag across many machines, you can add the flag with a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: docker.service
+ dropins:
+ - name: flags.conf
+ contents: |
+ [Service]
+ Environment="DOCKER_OPTS=--debug"
+```
+
+## Use an HTTP proxy
+
+If you're operating in a locked down networking environment, you can specify an HTTP proxy for Docker to use via an environment variable. First, create a directory for drop-in configuration for Docker:
+
+```shell
+mkdir /etc/systemd/system/docker.service.d
+```
+
+Now, create a file called `/etc/systemd/system/docker.service.d/http-proxy.conf` that adds the environment variable:
+
+```ini
+[Service]
+Environment="HTTP_PROXY=http://proxy.example.com:8080"
+```
+
+To apply the change, reload the unit and restart Docker:
+
+```shell
+systemctl daemon-reload
+systemctl restart docker
+```
+
+Proxy environment variables can also be set [system-wide][systemd-env-vars].
+
+### Butane Config (proxy)
+
+The easiest way to use this proxy on all of your machines is via a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: docker.service
+ enabled: true
+ dropins:
+ - name: 20-http-proxy.conf
+ contents: |
+ [Service]
+ Environment="HTTP_PROXY=http://proxy.example.com:8080"
+```
+
+## Increase ulimits
+
+If you need to increase certain ulimits that are too low for your application by default, like memlock, you will need to modify the Docker service to increase the limit. First, create a directory for drop-in configuration for Docker:
+
+```shell
+mkdir /etc/systemd/system/docker.service.d
+```
+
+Now, create a file called `/etc/systemd/system/docker.service.d/increase-ulimit.conf` that adds the increased limit:
+
+```ini
+[Service]
+LimitMEMLOCK=infinity
+```
+
+To apply the change, reload the unit and restart Docker:
+
+```shell
+systemctl daemon-reload
+systemctl restart docker
+```
+
+### Butane Config (ulimits)
+
+The easiest way to use these new ulimits on all of your machines is via a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: docker.service
+ enabled: true
+ dropins:
+ - name: 30-increase-ulimit.conf
+ contents: |
+ [Service]
+ LimitMEMLOCK=infinity
+```
+
+## Using a dockercfg file for authentication
+
+A JSON file named `.dockercfg` can be created in your home directory to hold authentication information for a public or private Docker registry.
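+
+The legacy `.dockercfg` format keys each registry to a base64-encoded `username:password` pair (newer Docker clients use `~/.docker/config.json` instead, usually written by `docker login`). A sketch with hypothetical credentials:
+
+```shell
+# The "auth" value in .dockercfg is base64("username:password")
+printf 'myname:mypassword' | base64
+# → bXluYW1lOm15cGFzc3dvcmQ=
+```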
+
+[docker-socket-systemd]: https://github.com/docker/docker/pull/17211
+[drop-in]: ../setup/systemd/drop-in-units
+[mounting-storage]: ../setup/storage/mounting-storage
+[self-signed-certs]: ../setup/security/generate-self-signed-certificates
+[systemd-socket]: https://www.freedesktop.org/software/systemd/man/systemd.socket.html
+[systemd-env-vars]: ../setup/systemd/environment-variables/#system-wide-environment-variables
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/container-runtimes/getting-started-with-docker.md b/content/docs/latest/container-runtimes/getting-started-with-docker.md
new file mode 100644
index 00000000..275dd1d7
--- /dev/null
+++ b/content/docs/latest/container-runtimes/getting-started-with-docker.md
@@ -0,0 +1,175 @@
+---
+title: Getting started with Docker
+description: Basic Docker operations on Flatcar
+weight: 10
+aliases:
+ - ../os/getting-started-with-docker
+---
+
+Docker is an open-source project that makes creating and managing Linux containers really easy. Containers are like extremely lightweight VMs – they allow code to run in isolation from other containers but safely share the machine’s resources, all without the overhead of a hypervisor.
+
+Docker containers can boot extremely fast (in milliseconds!) which gives you unprecedented flexibility in managing load across your cluster. For example, instead of running chef on each of your VMs, it’s faster and more reliable to have your build system create a container and launch it on the appropriate number of Flatcar Container Linux hosts. This guide will show you how to launch a container, install some software on it, commit that container, and optionally launch it on another Flatcar Container Linux machine. Before starting, make sure you've got at least one Flatcar Container Linux machine up and running — try it on [Amazon EC2][aws-ec2] or locally with [Vagrant][vagrant].
+
+## Docker CLI basics
+
+Docker has a [straightforward CLI][docker-cli] that allows you to do almost everything you could want to do with a container. All of these commands use the image ID (e.g. be29975e0098), the image name (e.g. myusername/webapp), and the container ID (e.g. 72d468f455ea) interchangeably depending on the operation you are trying to perform. This is confusing at first, so pay special attention to what you're using.
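+
+For example, `docker inspect` accepts either form:
+
+```shell
+docker ps                          # list running containers (container IDs and names)
+docker images                      # list local images (image IDs and names)
+docker inspect 72d468f455ea        # works with a container ID...
+docker inspect myusername/webapp   # ...or an image name
+```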
+
+## Launching a container
+
+Launching a container is as simple as `docker run` + the image name you would like to run + the command to run within the container. If the image doesn't exist on your local machine, Docker will attempt to fetch it from the public image registry. Later we'll explore how to use Docker with a private registry. It's important to note that containers are designed to stop once the command executed within them has exited. For example, if you ran `/bin/echo hello world` as your command, the container will start, print hello world, and then stop:
+
+```shell
+docker run ubuntu /bin/echo hello world
+```
+
+Let's launch an Ubuntu container and install Apache inside of it using the bash prompt:
+
+```shell
+docker run -t -i ubuntu /bin/bash
+```
+
+The `-t` and `-i` flags allocate a pseudo-tty and keep stdin open even if not attached. This will allow you to use the container like a traditional VM as long as the bash prompt is running. Install Apache with `apt-get update && apt-get install apache2`. You're probably wondering what address you can connect to in order to test that Apache was correctly installed...we'll get to that after we commit the container.
+
+## Committing a container
+
+After that completes, we need to `commit` these changes to our container with the container ID and the image name.
+
+To find the container ID, open another shell (so the container is still running) and read the ID using `docker ps`.
+
+The image name is in the format of `username/name`. We're going to use `myname` as our username in this example but you should [sign up for a Docker.IO user account][docker-signup] and use that instead.
+
+It's important to note that you can commit using any username and image name locally, but to push an image to the public registry, the username must be a valid [Docker.IO user account][docker-signup].
+
+Commit the container with the container ID, your username, and the name `myapache`:
+
+```shell
+docker commit 72d468f455ea myname/myapache
+```
+
+The overlay filesystem works similarly to git: our image now builds off of the `ubuntu` base and adds another layer with Apache on top. These layers are cached separately, so you won't have to pull down the ubuntu base more than once.
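+
+You can see these layers with `docker history`, which lists an image's layer stack from newest to oldest:
+
+```shell
+# The Apache layer from our commit sits on top of the cached ubuntu base layers
+docker history myname/myapache
+```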
+
+## Keeping the Apache container running
+
+Now we have our Ubuntu container with Apache running in one shell and an image of that container sitting on disk. Let's launch a new container based on that image but set it up to keep running indefinitely. The basic syntax looks like this, but we need to configure a few additional options that we'll fill in as we go:
+
+```shell
+docker run [options] [image] [process]
+```
+
+The first step is to tell Docker that we want to run our `myname/myapache` image:
+
+```shell
+docker run [options] myname/myapache [process]
+```
+
+### Run container detached
+
+When running Docker containers manually, the most important option is to run the container in detached mode with the `-d` flag. This will output the container ID to show that the command was successful, but nothing else. At any time you can run `docker ps` in the other shell to view a list of the running containers. Our command now looks like:
+
+```shell
+docker run -d myname/myapache [process]
+```
+
+After you are comfortable with the mechanics of running containers by hand, it's recommended to use [systemd units][systemd-getting-started] to run your containers on a cluster of Flatcar Container Linux machines.
+
+Do not run containers in detached mode inside systemd unit files. Detached mode prevents your init system, in our case systemd, from monitoring the process that owns the container, because detached mode forks it into the background. To avoid this issue, simply omit the `-d` flag when not running something manually.
+
+### Run Apache in foreground
+
+We need to run the apache process in the foreground, since our container will stop when the process specified in the `docker run` command stops. We can do this by passing the `-D FOREGROUND` option to the apache2 process:
+
+```shell
+/usr/sbin/apache2ctl -D FOREGROUND
+```
+
+Let's add that to our command:
+
+```shell
+docker run -d myname/myapache /usr/sbin/apache2ctl -D FOREGROUND
+```
+
+### Permanently running a container
+
+While the sections above explained how to run a container when configuring it, for a production setup, you should not manually start and babysit containers.
+
+Instead, create a systemd unit file to make systemd keep that container running. See [Getting Started with systemd][systemd-getting-started] for details.
+
+Alternatively, Docker also has a feature to start existing containers on boot, when the container has the `restart` attribute set to `always`.
+This requires the Docker service to get started on boot instead of using the default socket activation that starts on-demand.
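+
+For example, a container started with the restart policy set looks like:
+
+```shell
+# Docker will keep this container running and restart it after reboots,
+# provided the Docker service itself starts on boot
+docker run -d --restart always -p 80:80 myname/myapache /usr/sbin/apache2ctl -D FOREGROUND
+```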
+
+Here is a Butane Config to enable the Docker service while disabling socket activation:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ # Ensure docker starts automatically instead of being only socket-activated
+ - name: docker.service
+ enabled: true
+storage:
+ links:
+ - path: /etc/systemd/system/multi-user.target.wants/docker.service
+ target: /usr/lib/systemd/system/docker.service
+ hard: false
+ overwrite: true
+```
+
+**NOTE:** for Flatcar versions older than the 3761 major release, the symlink is unnecessary; the following configuration suffices:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ # Ensure docker starts automatically instead of being only socket-activated
+ - name: docker.service
+ enabled: true
+```
+
+### Network access to 80
+
+The default apache install will be running on port 80. To give our container access to traffic over port 80, we use the `-p` flag and specify the port on the host that maps to the port inside the container. In our case we want 80 for each, so we include `-p 80:80` in our command:
+
+```shell
+docker run -d -p 80:80 myname/myapache /usr/sbin/apache2ctl -D FOREGROUND
+```
+
+You can now run this command on your Flatcar Container Linux host to create the container. You should see the default apache webpage when you load either `localhost:80` or the IP of your remote server. Be sure that any firewall or EC2 Security Group allows traffic to port 80.
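+
+A quick way to check from the host itself:
+
+```shell
+# Fetch only the response headers from the containerized Apache
+curl -I http://localhost:80/
+```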
+
+## Using the Docker registry
+
+Earlier we downloaded the ubuntu image remotely from the Docker public registry because it didn't exist on our local machine. We can also push local images to the public registry (or a private registry) very easily with the `push` command:
+
+```shell
+docker push myname/myapache
+```
+
+To push to a private repository the syntax is very similar. First, we must prefix our image with the host running our private registry instead of our username. List images by running `docker images` and insert the correct ID into the `tag` command:
+
+```shell
+docker tag f455ea72d468 registry.example.com:5000/myname/myapache
+```
+
+After tagging, the image needs to be pushed to the registry:
+
+```shell
+docker push registry.example.com:5000/myname/myapache
+```
+
+Once the image is done uploading, you should be able to start the exact same container on a different Flatcar Container Linux host by running:
+
+```shell
+docker run -d -p 80:80 registry.example.com:5000/myname/myapache /usr/sbin/apache2ctl -D FOREGROUND
+```
+
+## More information
+
+ * [Docker Website](http://www.docker.com/)
+ * [Docker's Getting Started Guide](https://docs.docker.com/mac/started/)
+
+[aws-ec2]: ../installing/cloud/aws-ec2
+[vagrant]: ../installing/vms/vagrant
+[docker-cli]: https://docs.docker.com/engine/reference/commandline/cli/
+[docker-signup]: https://hub.docker.com/account/signup/
+[systemd-getting-started]: ../setup/systemd/getting-started
diff --git a/content/docs/latest/container-runtimes/getting-started-with-kubernetes.md b/content/docs/latest/container-runtimes/getting-started-with-kubernetes.md
new file mode 100644
index 00000000..d0399562
--- /dev/null
+++ b/content/docs/latest/container-runtimes/getting-started-with-kubernetes.md
@@ -0,0 +1,381 @@
+---
+title: Getting started with Kubernetes
+description: Operate Kubernetes from Flatcar
+aliases:
+ - ../os/switching-from-docker-to-containerd-for-kubernetes
+ - ./switching-from-docker-to-containerd-for-kubernetes
+weight: 11
+---
+
+One of Flatcar's purposes is to run container workloads. The term is quite generic: it ranges from running a single Docker container to operating a Kubernetes cluster.
+
+This documentation covers the preliminary aspects of operating a Kubernetes cluster based on Flatcar.
+
+# Supported Kubernetes versions
+
+A basic Kubernetes scenario (deploying a simple Nginx) is tested on Flatcar across the channels and various CNIs; it mainly ensures that Kubernetes can be correctly installed and can operate in a simple way.
+
+One way to contribute to Flatcar would be to extend the covered CNIs (example: [kubenet][kubenet]) or to provide more complex scenarios (example: [cilium extension][cilium]).
+
+This is a compatibility matrix between Flatcar and Kubernetes deployed using vanilla components and Flatcar provided software:
+
+| :arrow_down: Flatcar channel \ Kubernetes Version :arrow_right: | 1.23 | 1.24 | 1.25 | 1.26 | 1.27 | 1.28 |
+|--------------------------------------|--------------------|--------------------|--------------------|--------------------|--------------------|---------------------------------|
+| Alpha | :large_orange_diamond: | :large_orange_diamond: | :white_check_mark: | :white_check_mark: |:white_check_mark: | :white_check_mark: |
+| Beta | :large_orange_diamond: | :large_orange_diamond: | :white_check_mark: | :white_check_mark: |:white_check_mark: | :white_check_mark: |
+| Stable | :large_orange_diamond: | :large_orange_diamond: | :white_check_mark: | :white_check_mark: |:white_check_mark: | :white_check_mark: |
+| LTS | :large_orange_diamond: | :large_orange_diamond: | :white_check_mark: | :x: |:x: | :x: |
+
+:large_orange_diamond:: The version is no longer tested before releases but was known to work.
+
+Tested CNIs:
+- Cilium
+- Flannel
+- Calico
+
+_Known issues_:
+* Flannel > 0.17.0 does not work with enforced SELinux ([flatcar#779][flatcar-779])
+* Cilium needs to be patched regarding SELinux labels to work (even in permissive mode) ([flatcar#891][flatcar-891])
+
+# Deploy a Kubernetes cluster with Flatcar
+
+## Using Kubeadm
+
+`kubeadm` remains a standard way to quickly deploy and operate a Kubernetes cluster. It's possible to install the tools (`kubeadm`, `kubelet`, etc.) using Ignition, or directly with the Kubernetes sysext image distributed from the [flatcar/sysext-bakery][sysext-bakery] release page.
+
+### Set up the control plane
+
+Here are two examples of setting up a control plane with [Butane][butane]. The first example uses the systemd-sysext approach to bring in the binaries and update them through systemd-sysupdate. The second approach fetches the binaries but has no way of updating them in-place.
+
+This is an example using systemd-sysext and systemd-sysupdate. NOTE: We are using Kured to coordinate node reboots when a new Kubernetes sysext image is available (or when Flatcar has been updated), hence the `/run/reboot-required` file.
+
+```yaml
+---
+version: 1.0.0
+variant: flatcar
+storage:
+  links:
+    - target: /opt/extensions/kubernetes/kubernetes-v1.27.4-x86-64.raw
+      path: /etc/extensions/kubernetes.raw
+      hard: false
+  files:
+    - path: /etc/sysupdate.kubernetes.d/kubernetes.conf
+      contents:
+        source: https://github.com/flatcar/sysext-bakery/releases/download/20230901/kubernetes.conf
+    - path: /etc/sysupdate.d/noop.conf
+      contents:
+        source: https://github.com/flatcar/sysext-bakery/releases/download/20230901/noop.conf
+    - path: /opt/extensions/kubernetes/kubernetes-v1.27.4-x86-64.raw
+      contents:
+        source: https://github.com/flatcar/sysext-bakery/releases/download/20230901/kubernetes-v1.27.4-x86-64.raw
+systemd:
+  units:
+    - name: systemd-sysupdate.timer
+      enabled: true
+    - name: systemd-sysupdate.service
+      dropins:
+        - name: kubernetes.conf
+          contents: |
+            [Service]
+            ExecStartPre=/usr/bin/sh -c "readlink --canonicalize /etc/extensions/kubernetes.raw > /tmp/kubernetes"
+            ExecStartPre=/usr/lib/systemd/systemd-sysupdate -C kubernetes update
+            ExecStartPost=/usr/bin/sh -c "readlink --canonicalize /etc/extensions/kubernetes.raw > /tmp/kubernetes-new"
+            ExecStartPost=/usr/bin/sh -c "[[ $(cat /tmp/kubernetes) != $(cat /tmp/kubernetes-new) ]] && touch /run/reboot-required"
+    - name: kubeadm.service
+      enabled: true
+      contents: |
+        [Unit]
+        Description=Kubeadm service
+        Requires=containerd.service
+        After=containerd.service
+        ConditionPathExists=!/etc/kubernetes/kubelet.conf
+        [Service]
+        ExecStartPre=/usr/bin/kubeadm init
+        ExecStartPre=/usr/bin/mkdir /home/core/.kube
+        ExecStartPre=/usr/bin/cp /etc/kubernetes/admin.conf /home/core/.kube/config
+        ExecStart=/usr/bin/chown -R core:core /home/core/.kube
+        [Install]
+        WantedBy=multi-user.target
+```
+
+:warning: For readability, the checksums of the downloaded artifacts have been omitted.
+
+```yaml
+---
+version: 1.0.0
+variant: flatcar
+storage:
+  files:
+    - path: /opt/bin/kubectl
+      mode: 0755
+      contents:
+        source: https://dl.k8s.io/v1.26.0/bin/linux/amd64/kubectl
+    - path: /opt/bin/kubeadm
+      mode: 0755
+      contents:
+        source: https://dl.k8s.io/v1.26.0/bin/linux/amd64/kubeadm
+    - path: /opt/bin/kubelet
+      mode: 0755
+      contents:
+        source: https://dl.k8s.io/v1.26.0/bin/linux/amd64/kubelet
+    - path: /etc/systemd/system/kubelet.service
+      contents:
+        source: https://raw.githubusercontent.com/kubernetes/release/v0.14.0/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service
+    - path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+      contents:
+        source: https://raw.githubusercontent.com/kubernetes/release/v0.14.0/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf
+    - path: /etc/kubeadm.yml
+      contents:
+        inline: |
+          apiVersion: kubeadm.k8s.io/v1beta2
+          kind: InitConfiguration
+          nodeRegistration:
+            kubeletExtraArgs:
+              volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
+          ---
+          apiVersion: kubeadm.k8s.io/v1beta2
+          kind: ClusterConfiguration
+          controllerManager:
+            extraArgs:
+              flex-volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
+systemd:
+  units:
+    - name: kubelet.service
+      enabled: true
+      dropins:
+        - name: 20-kubelet.conf
+          contents: |
+            [Service]
+            ExecStart=
+            ExecStart=/opt/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
+    - name: kubeadm.service
+      enabled: true
+      contents: |
+        [Unit]
+        Description=Kubeadm service
+        Requires=containerd.service
+        After=containerd.service
+        ConditionPathExists=!/etc/kubernetes/kubelet.conf
+        [Service]
+        Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin"
+        ExecStartPre=/opt/bin/kubeadm config images pull
+        ExecStartPre=/opt/bin/kubeadm init --config /etc/kubeadm.yml
+        ExecStartPre=/usr/bin/mkdir /home/core/.kube
+        ExecStartPre=/usr/bin/cp /etc/kubernetes/admin.conf /home/core/.kube/config
+        ExecStart=/usr/bin/chown -R core:core /home/core/.kube
+        [Install]
+        WantedBy=multi-user.target
+```
+
+
+This minimal configuration can be used with Flatcar on QEMU (:warning: be sure that the instance has enough memory: 4096 MB is a good value).
+
+```bash
+butane < config.yaml > config.json
+./flatcar_production_qemu.sh -i config.json -- -display curses
+kubectl get nodes
+NAME STATUS ROLES AGE VERSION
+localhost NotReady control-plane 6m5s v1.26.0
+```
+
+The control plane will appear as NotReady until a CNI is deployed; here is an example with Calico:
+```bash
+kubectl \
+ apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
+kubectl get nodes
+NAME STATUS ROLES AGE VERSION
+localhost Ready control-plane 8m30s v1.26.0
+```
+
+If you want to coordinate node reboots when there is a new Kubernetes sysext image or a Flatcar update, you can deploy [`Kured`][kured]:
+```bash
+latest=$(curl -s https://api.github.com/repos/kubereboot/kured/releases | jq -r '.[0].tag_name')
+kubectl apply -f "https://github.com/kubereboot/kured/releases/download/$latest/kured-$latest-dockerhub.yaml"
+```
+
+We can now prepare the nodes to join the cluster.
+
+### Set up the nodes
+
+Here are two examples of a [Butane][butane] configuration to set up the nodes. The first example uses the systemd-sysext approach to bring in the binaries and update them through systemd-sysupdate. The second approach fetches the binaries but has no way of updating them in-place.
+
+
+This is an example using systemd-sysext and systemd-sysupdate. NOTE: We are using Kured to coordinate node reboots when a new Kubernetes sysext image is available (or when Flatcar has been updated), hence the `/run/reboot-required` file.
+
+```yaml
+---
+version: 1.0.0
+variant: flatcar
+storage:
+  links:
+    - target: /opt/extensions/kubernetes/kubernetes-v1.27.4-x86-64.raw
+      path: /etc/extensions/kubernetes.raw
+      hard: false
+  files:
+    - path: /etc/sysupdate.kubernetes.d/kubernetes.conf
+      contents:
+        source: https://github.com/flatcar/sysext-bakery/releases/download/20230901/kubernetes.conf
+    - path: /etc/sysupdate.d/noop.conf
+      contents:
+        source: https://github.com/flatcar/sysext-bakery/releases/download/20230901/noop.conf
+    - path: /opt/extensions/kubernetes/kubernetes-v1.27.4-x86-64.raw
+      contents:
+        source: https://github.com/flatcar/sysext-bakery/releases/download/20230901/kubernetes-v1.27.4-x86-64.raw
+systemd:
+  units:
+    - name: systemd-sysupdate.timer
+      enabled: true
+    - name: systemd-sysupdate.service
+      dropins:
+        - name: kubernetes.conf
+          contents: |
+            [Service]
+            ExecStartPre=/usr/bin/sh -c "readlink --canonicalize /etc/extensions/kubernetes.raw > /tmp/kubernetes"
+            ExecStartPre=/usr/lib/systemd/systemd-sysupdate -C kubernetes update
+            ExecStartPost=/usr/bin/sh -c "readlink --canonicalize /etc/extensions/kubernetes.raw > /tmp/kubernetes-new"
+            ExecStartPost=/usr/bin/sh -c "[[ $(cat /tmp/kubernetes) != $(cat /tmp/kubernetes-new) ]] && touch /run/reboot-required"
+    - name: kubeadm.service
+      enabled: true
+      contents: |
+        [Unit]
+        Description=Kubeadm service
+        Requires=containerd.service
+        After=containerd.service
+        [Service]
+        ExecStart=/usr/bin/kubeadm join $(output from 'kubeadm token create --print-join-command')
+        [Install]
+        WantedBy=multi-user.target
+```
+
+:warning: For readability, the checksums of the downloaded artifacts have been omitted.
+
+```yaml
+---
+version: 1.0.0
+variant: flatcar
+storage:
+  files:
+    - path: /opt/bin/kubeadm
+      mode: 0755
+      contents:
+        source: https://dl.k8s.io/v1.26.0/bin/linux/amd64/kubeadm
+    - path: /opt/bin/kubelet
+      mode: 0755
+      contents:
+        source: https://dl.k8s.io/v1.26.0/bin/linux/amd64/kubelet
+    - path: /etc/systemd/system/kubelet.service
+      contents:
+        source: https://raw.githubusercontent.com/kubernetes/release/v0.14.0/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service
+    - path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+      contents:
+        source: https://raw.githubusercontent.com/kubernetes/release/v0.14.0/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf
+systemd:
+  units:
+    - name: kubelet.service
+      enabled: true
+      dropins:
+        - name: 20-kubelet.conf
+          contents: |
+            [Service]
+            ExecStart=
+            ExecStart=/opt/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
+    - name: kubeadm.service
+      enabled: true
+      contents: |
+        [Unit]
+        Description=Kubeadm service
+        Requires=containerd.service
+        After=containerd.service
+        [Service]
+        Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin"
+        ExecStart=/opt/bin/kubeadm join $(output from 'kubeadm token create --print-join-command')
+        [Install]
+        WantedBy=multi-user.target
+```
+
+This method is far from ideal in terms of infrastructure as code, as it requires a two-step process: create the control plane to generate the join configuration, then pass that configuration to the nodes. Other solutions exist to make things easier, like Cluster API or [Typhoon][typhoon].
+
+### Switching from Docker to containerd for Kubernetes
+
+In Kubernetes v1.20, `dockershim` was deprecated, and it was [officially](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#dockershim-removed-from-kubelet) removed in Kubernetes v1.24.
+
+The `containerd` CRI plugin is enabled by default and you can use containerd for Kubernetes while still allowing Docker to function.
+Recent Kubernetes versions will prefer containerd over Docker automatically on recent Flatcar versions.
+
+If you run `kubelet` in a Docker container, make sure it has access
+to the following directories on the host file system:
+- `/run/docker/libcontainerd/`
+- `/run/containerd/`
+- `/var/lib/containerd/`
+
+And that it has access to the following binaries on the host file system and that they are included in `PATH`:
+- For Flatcar releases until major version 3760:
+ - `/run/torcx/unpack/docker/bin/containerd-shim-runc-v1`
+ - `/run/torcx/unpack/docker/bin/containerd-shim-runc-v2`
+- For Flatcar releases above major version 3760:
+ - `/usr/bin/containerd-shim-runc-v1`
+ - `/usr/bin/containerd-shim-runc-v2`
+
+Finally, tell `kubelet` to use containerd by adding the following flags to it:
+- `--container-runtime=remote`
+- `--container-runtime-endpoint=unix:///run/containerd/containerd.sock`
+
+## Cluster API
+
+From the official [documentation][capi-documentation]:
+> Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
+
+As CAPI requires some tools to be preinstalled on the OS to work correctly, Flatcar images can be built using the [image-builder][image-builder] project.
+
+CAPI is an evolving project and Flatcar support for the various providers is in progress; here is the current list of supported providers:
+* [AWS][capi-aws]
+* [Azure][capi-azure]
+* [OpenStack][openstack]
+* [vSphere][capi-vsphere]
+
+## Kubespray
+
+Kubespray is an open-source project used to deploy production-ready Kubernetes clusters; learn more about it in the [documentation][kubespray-documentation].
+
+Based on user feedback, Flatcar is known to work with Kubespray - you can read more about it in this section: [https://kubespray.io/#/docs/flatcar][kubespray-documentation-flatcar].
+
+[butane]: https://coreos.github.io/butane/
+[capi-documentation]: https://cluster-api.sigs.k8s.io/
+[capi-aws]: https://cluster-api-aws.sigs.k8s.io/
+[capi-azure]: https://capz.sigs.k8s.io/
+[capi-vsphere]: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/main/docs/ignition.md
+[cilium]: https://github.com/flatcar/mantle/pull/292
+[flatcar-779]: https://github.com/flatcar/Flatcar/issues/779
+[flatcar-891]: https://github.com/flatcar/Flatcar/issues/891
+[image-builder]: https://github.com/kubernetes-sigs/image-builder
+[kubenet]: https://github.com/flatcar/Flatcar/issues/579
+[kubespray-documentation]: https://kubespray.io
+[kubespray-documentation-flatcar]: https://kubespray.io/#/docs/flatcar
+[kured]: https://kured.dev/docs/
+[openstack]: https://cluster-api-openstack.sigs.k8s.io/clusteropenstack/configuration.html#ignition-based-images
+[sysext-bakery]: https://github.com/flatcar/sysext-bakery
+[typhoon]: https://typhoon.psdn.io/
diff --git a/content/docs/latest/container-runtimes/registry-authentication.md b/content/docs/latest/container-runtimes/registry-authentication.md
new file mode 100644
index 00000000..bbd9b714
--- /dev/null
+++ b/content/docs/latest/container-runtimes/registry-authentication.md
@@ -0,0 +1,242 @@
+---
+title: Authenticating to Container Registries
+description: Configuration examples for authenticating to different container registries.
+weight: 50
+aliases:
+ - ../os/registry-authentication
+ - ../clusters/management/registry-authentication
+---
+
+Many container image registries require authentication. This document explains how to configure container management software like Docker, Kubernetes, rkt, and Mesos to authenticate with and pull containers from registries like [Quay][quay-site] and [Docker Hub][docker-hub-site].
+
+## Using a Quay robot for registry auth
+
+The recommended way to authenticate container manager software with [quay.io][quay-site] is via a [Quay Robot][quay-robot]. The robot account acts as an authentication token with some nice features, including:
+
+* Ready-made repository authentication configuration files
+* Credentials are limited to specific repositories
+* Choose from read, write, or admin privileges
+* Token regeneration
+
+![Quay Robot settings][quay-bot-img]
+
+Quay robots provide config files for Kubernetes, Docker, Mesos, and rkt, along with instructions for using each. Find this information in the **Robot Accounts** tab under your Quay user settings. For more information, see the [Quay robot documentation][quay-robot].
+
+## Manual registry auth setup
+
+If you are using a registry other than Quay (e.g., Docker Hub), you will need to manually configure your credentials with your container runtime or orchestration tool.
+
+### Docker
+
+The Docker client uses an interactive command to authenticate with a centralized service.
+
+```shell
+docker login -u <username> -p <password> https://registry.example.io
+```
+
+This command creates the file `$HOME/.docker/config.json`, formatted like the following example:
+
+**/home/core/.docker/config.json:**
+
+```json
+{
+  "auths": {
+    "https://index.docker.io/v1/": {
+      "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx="
+    },
+    "quay.io": {
+      "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+    },
+    "https://registry.example.io/v0/": {
+      "auth": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx="
+    }
+  }
+}
+```
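For reference, each `auth` value is simply the Base64 encoding of `username:password`. A quick way to generate or inspect one from the shell (the credentials below are the illustrative ones used elsewhere on this page):

```shell
# Encode username:password the way docker login stores it in config.json
auth=$(printf '%s' 'giffee_lover_93:passphrases are great!' | base64 -w0)
echo "$auth"
# Decode it back to verify
printf '%s' "$auth" | base64 -d
```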
+
+On Flatcar Container Linux, this process can be automated by writing out the config file during system provisioning [with a Butane Config][butane-configs]. Since the config is written to the `core` user's home directory, ensure that your systemd units run as that user, by adding, e.g., `User=core`.
+
+Docker also offers the ability to configure a credentials store, such as your operating system's keychain. This is outlined in the [Docker login documentation][docker-login].
+
+### Kubernetes
+
+Kubernetes uses [*Secrets*][k8s-secrets] to store registry credentials.
+
+When manually configuring authentication with *any* registry in Kubernetes (including Quay and Docker Hub) the following command is used to generate the Kubernetes registry-auth secret:
+
+```shell
+$ kubectl create secret docker-registry my-favorite-registry-secret --docker-username=giffee_lover_93 --docker-password='passphrases are great!' --docker-email='giffee.lover.93@example.com' --docker-server=registry.example.io
+secret "my-favorite-registry-secret" created
+```
+
+If you prefer, you can store this in a YAML file by adding the `--dry-run` and `-o yaml` flags to the end of your command and copying or redirecting the output to a file:
+
+```shell
+kubectl create secret docker-registry my-favorite-registry-secret [...] --dry-run -o yaml | tee credentials.yaml
+```
+
+```yaml
+apiVersion: v1
+data:
+  .dockercfg: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
+kind: Secret
+metadata:
+  creationTimestamp: null
+  name: my-favorite-registry-secret
+type: kubernetes.io/dockercfg
+```
+
+```shell
+$ kubectl create -f credentials.yaml
+secret "my-favorite-registry-secret" created
+```
+
+You can check that this secret is loaded with the `kubectl get` command:
+
+```shell
+$ kubectl get secret my-favorite-registry-secret
+NAME TYPE DATA AGE
+my-favorite-registry-secret kubernetes.io/dockercfg 1 30m
+```
+
+The secret can be used in a Pod spec with the `imagePullSecrets` variable:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: somepod
+  namespace: all
+spec:
+  containers:
+    - name: web
+      image: registry.example.io/v0/giffee_lover_93/somerepo
+  imagePullSecrets:
+    - name: my-favorite-registry-secret
+```
+
+For more information, check the [docker-registry Kubernetes secret][k8s-docker-registry] and [Kubernetes imagePullSecrets][k8s-image-pull] documentation.
+
+### rkt
+
+rkt stores registry-authentication in a JSON file stored in the directory `/etc/rkt/auth.d/`.
+
+`/etc/rkt/auth.d/registry.example.io.json`:
+
+```json
+{
+  "rktKind": "auth",
+  "rktVersion": "v1",
+  "domains": [
+    "https://registry.example.io/v0/"
+  ],
+  "type": "basic",
+  "credentials": {
+    "user": "giffeeLover93",
+    "password": "passphrases are great!"
+  }
+}
+```
+
+While you *can* embed your password in plaintext in this file, you should try using a disposable token instead. Check your registry documentation to see if it offers token-based authentication.
+
+Now rkt will authenticate with `https://registry.example.io/v0/` using the provided credentials to fetch images.
+
+For more information about rkt credentials, see the [rkt configuration docs][rkt-config].
+
+Just like with the Docker config, this file can be copied to `/etc/rkt/auth.d/registry.example.io.json` on a Flatcar Container Linux node during system provisioning with [a Butane Config][butane-configs].
+
+### Mesos
+
+Mesos uses a gzip-compressed archive of the `.docker` directory (containing `config.json`) to access private registries.
+
+Once you have followed the above steps to [create the Docker registry auth config file][docker-instructions], create your Mesos configuration using `tar`:
+
+```shell
+tar czf registry.example.io.tar.gz -C ~ .docker/config.json
+```
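To sanity-check the result, you can stage a dummy `.docker/config.json` in a scratch directory, build the archive there, and list its contents (all paths are illustrative):

```shell
# Build and inspect a registry-auth archive from a scratch directory
tmp=$(mktemp -d)
mkdir -p "$tmp/.docker"
echo '{"auths":{}}' > "$tmp/.docker/config.json"
tar czf "$tmp/registry.example.io.tar.gz" -C "$tmp" .docker/config.json
tar tzf "$tmp/registry.example.io.tar.gz"   # → .docker/config.json
```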
+
+The archive secret is referenced via the `uris` field in a container specification file:
+
+```json
+{
+  "id": "/some/name/or/id",
+  "cpus": 1,
+  "mem": 1024,
+  "instances": 1,
+  "container": {
+    "type": "DOCKER",
+    "docker": {
+      "image": "https://registry.example.io/v0/giffee_lover_93/some-image",
+      "network": "HOST"
+    }
+  },
+  "uris": [
+    "file:///path/to/registry.example.io.tar.gz"
+  ]
+}
+```
+
+More thorough information about configuring Mesos registry authentication can be found on the ['Using a Private Docker Registry'][mesos-registry] documentation.
+
+## Copying the config file with a Butane Config
+
+[Butane Configs][butane-configs] can be used to provision a Flatcar Container Linux node on first boot. Here we will use it to copy registry authentication config files to their appropriate destination on disk. This provides immediate access to your private Docker Hub and Quay image repositories without the need for manual intervention. The same Butane Config file can be used to copy registry auth configs onto an entire cluster of Flatcar Container Linux nodes.
+
+Here is an example of using a Butane Config to write the `.docker/config.json` registry auth configuration file mentioned above to the appropriate path on the Flatcar Container Linux node:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /home/core/.docker/config.json
+      mode: 0644
+      contents:
+        inline: |
+          {
+            "auths": {
+              "quay.io": {
+                "auth": "AbCdEfGhIj",
+                "email": "your.email@example.com"
+              }
+            }
+          }
+```
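If you hand-edit the inline JSON, it's worth validating it before transpiling the Butane Config; any JSON parser will do, for example Python's `json.tool`:

```shell
# Pretty-print (and thereby syntax-check) the inline registry config
printf '%s' '{"auths":{"quay.io":{"auth":"AbCdEfGhIj","email":"your.email@example.com"}}}' \
  | python3 -m json.tool
```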
+
+Butane Configs can also download a file from a remote location and verify its integrity with a SHA512 hash:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /home/core/.docker/config.json
+      mode: 0644
+      contents:
+        source: http://internal.infra.example.com/cluster-docker-config.json
+        verification:
+          hash: sha512-0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
+```
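The `hash` value is the string `sha512-` followed by the hex digest of the file, which you can compute locally with `sha512sum` (the file here is a stand-in for your real config):

```shell
# Compute the value for the Butane "hash" field
tmpfile=$(mktemp)
echo '{"auths":{}}' > "$tmpfile"
echo "sha512-$(sha512sum "$tmpfile" | cut -d' ' -f1)"
```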
+
+For details, check out the [Butane Config examples][butane-examples].
+
+[config-valid]: https://coreos.com/validate/
+[docker-hub-site]: https://hub.docker.com/
+[docker-instructions]: #docker
+[docker-login]: https://docs.docker.com/engine/reference/commandline/login/
+[docker-reg-v2]: https://docs.docker.com/registry/spec/auth/jwt/
+[k8s-docker-registry]: https://kubernetes.io/docs/user-guide/kubectl/kubectl_create_secret_docker-registry/
+[k8s-image-pull]: https://kubernetes.io/docs/user-guide/images/
+[k8s-secrets]: https://kubernetes.io/docs/user-guide/secrets/
+[mesos-registry]: https://mesosphere.github.io/marathon/docs/native-docker-private-registry.html
+[quay-bot-img]: ../img/quay-robot-screen.png
+[quay-robot]: https://docs.quay.io/glossary/robot-accounts.html
+[quay-site]: https://quay.io/
+[rfc-2397]: https://tools.ietf.org/html/rfc2397
+[rkt-config]: registry-authentication/#rkt
+[butane-configs]: ../provisioning/config-transpiler
+[butane-examples]: ../provisioning/config-transpiler/examples
diff --git a/content/docs/latest/container-runtimes/switching-to-unified-cgroups.md b/content/docs/latest/container-runtimes/switching-to-unified-cgroups.md
new file mode 100644
index 00000000..c68a7c98
--- /dev/null
+++ b/content/docs/latest/container-runtimes/switching-to-unified-cgroups.md
@@ -0,0 +1,153 @@
+---
+title: Switching to Unified Cgroups
+linktitle: Switching to unified cgroups
+description: Overview of changes necessary to use unified cgroups with Kubernetes
+weight: 20
+aliases:
+---
+
+Beginning with Flatcar version 2969.0.0, Flatcar Linux has migrated to the unified
+cgroup hierarchy (aka cgroup v2). Much of the container ecosystem has already
+moved to default to cgroup v2. Cgroup v2 brings exciting new features in
+areas such as eBPF and rootless containers.
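You can check which hierarchy a booted system is using by inspecting the filesystem type mounted at `/sys/fs/cgroup`: `cgroup2fs` means the unified hierarchy (v2), while `tmpfs` means the legacy one (v1):

```shell
# cgroup2fs => unified hierarchy (v2); tmpfs => legacy hierarchy (v1)
stat -fc %T /sys/fs/cgroup/
```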
+
+Flatcar nodes deployed prior to this change will be kept on cgroups v1 (legacy
+hierarchy) and will require manual migration. During an update from an older
+Flatcar version, a post update script does two things:
+
+* adds the kernel command line parameters `systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller`
+ to `/usr/share/oem/grub.cfg`
+* creates a systemd drop-in unit at `/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf` that
+ configures `containerd` to keep using cgroupfs for cgroups.
+
+# Migrating old nodes to unified cgroups
+
+To undo the changes performed by the post update script, execute the following commands as root (or using `sudo`):
+
+```bash
+rm /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf
+sed -i -e '/systemd.unified_cgroup_hierarchy=0/d' /usr/share/oem/grub.cfg
+sed -i -e '/systemd.legacy_systemd_cgroup_controller/d' /usr/share/oem/grub.cfg
+reboot
+```
+
+# Starting new nodes with legacy cgroups
+
+Nodes deployed with the release incorporating the described changes use cgroups v2 by default. To revert to cgroups v1 on new
+nodes during provisioning, use the following Ignition snippet (here as Butane YAML to be transpiled to Ignition JSON):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  filesystems:
+    - device: "/dev/disk/by-label/OEM"
+      format: "btrfs"
+kernel_arguments:
+  should_exist:
+    - systemd.unified_cgroup_hierarchy=0
+    - systemd.legacy_systemd_cgroup_controller
+systemd:
+  units:
+    - name: containerd.service
+      dropins:
+        - name: 10-use-cgroupfs.conf
+          contents: |
+            [Service]
+            Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml
+```
+
+However, the kernel command line setting doesn't take effect on the first boot, and a reboot is required before the snippet becomes active.
+
+If your deployment can't tolerate the required reboot, consider using the following snippet to switch to legacy cgroups without a reboot. This is supported by Flatcar 3033.2.4 or newer:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  filesystems:
+    - device: "/dev/disk/by-label/OEM"
+      format: "btrfs"
+  files:
+    - path: /etc/flatcar-cgroupv1
+      mode: 0444
+kernel_arguments:
+  should_exist:
+    - systemd.unified_cgroup_hierarchy=0
+    - systemd.legacy_systemd_cgroup_controller
+systemd:
+  units:
+    - name: containerd.service
+      dropins:
+        - name: 10-use-cgroupfs.conf
+          contents: |
+            [Service]
+            Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml
+```
+
+Beware that upstream projects are expected to drop support for cgroups v1 over time.
+
+**Known issues:** Unprivileged containers with user namespaces may lack permissions to access the bind mount ([Flatcar#722](https://github.com/flatcar/Flatcar/issues/722)) and unmounting the bind mount before starting the container is needed.
+
+## Generate AWS EC2 cgroups v1 AMIs
+
+The [`create_cgroupv1_ami.sh` script](https://raw.githubusercontent.com/flatcar/flatcar-docs/main/create_cgroupv1_ami.sh) performs the image modification outlined above so you can upload your own Flatcar AMI that boots directly into cgroups v1 without needing an additional reboot.
+
+# Kubernetes
+
+The unified cgroup hierarchy is supported starting with Docker v20.10 and
+Kubernetes v1.19. Users that need to run older versions will need to revert to
+cgroups v1, but are urged to find a migration path. Flatcar now ships with Docker
+v20.10; older versions can be deployed following the instructions on [running custom Docker versions](use-a-custom-docker-or-containerd-version).
+
+Flatcar nodes that had Kubernetes deployed on them before the introduction of
+cgroups v2 should be careful when migrating. Depending on the deployment method,
+the `cgroupfs` cgroup driver may be hardcoded in the `kubelet` configuration.
+Cgroups v2 are only supported with the `systemd` cgroup driver. See [configuring a cgroup driver][kube-cgroup-docs] in the Kubernetes documentation for a discussion of cgroup drivers and how to migrate nodes. We recommend redeploying Kubernetes on fresh nodes instead of migrating in place.
+
+The cgroup driver used by `kubelet` should be the same as the one used by the `docker` daemon. `docker` defaults to the `systemd` cgroup driver when started on a system running cgroup v2 and to `cgroupfs` when running on a system with cgroup v1. The cgroup driver can be explicitly configured for `docker` by extending `/etc/docker/daemon.json`:
+```json
+{
+  "exec-opts": ["native.cgroupdriver=systemd"]
+}
+```
+or adding a `docker.service` drop-in at `/etc/systemd/system/docker.service.d/10-cgroup-v2.conf`:
+```ini
+[Service]
+Environment="DOCKER_CGROUPS=--exec-opt native.cgroupdriver=systemd"
+```
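After either change, reload systemd and restart the Docker daemon so the new driver takes effect (shown as a transcript; adjust to your deployment workflow):

```sh
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```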
+
+## Container Runtimes
+
+When deploying Kubernetes through `kubeadm`, the default container runtime on Flatcar is currently `dockershim`. In this setup, `kubelet` talks to `dockershim`, which talks to `docker`, which interfaces with `containerd`. The `SystemdCgroup` setting in `containerd`'s `config.toml` is ignored; `docker`'s and `kubelet`'s cgroup driver settings must match. Starting with Kubernetes v1.22, `kubeadm` defaults to the `systemd` `cgroupDriver` setting if none is provided explicitly. Out of the box, Flatcar is compatible with the Docker and Kubernetes defaults: everything will use the `systemd` cgroup driver.
+
+When using `kubeadm`, add the following snippet to your `kubeadm-config.yaml` to configure the `kubelet` cgroup driver:
+
+```yaml
+---
+kind: KubeletConfiguration
+apiVersion: kubelet.config.k8s.io/v1beta1
+cgroupDriver: systemd
+```
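The configuration file is then passed to `kubeadm` at cluster creation, for example during the init step:

```sh
$ kubeadm init --config kubeadm-config.yaml
```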
+
+## Containerd
+
+If users choose the `containerd` runtime, they must ensure that `containerd`'s setting for `SystemdCgroup` is consistent with `kubelet` and `docker` settings. Flatcar enables `SystemdCgroup` by default for `containerd`. Users may change the setting to suit their deployment.
+If you maintain your own containerd configuration or followed the instructions on
+[how to customize the containerd configuration](customizing-docker), you should add the relevant lines to your `config.toml`:
+```toml
+version = 2
+
+[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
+ # setting runc.options unsets parent settings
+ runtime_type = "io.containerd.runc.v2"
+ [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
+ SystemdCgroup = true
+ ```
+
+For a more detailed discussion of container runtimes, see the [Kubernetes documentation][kube-runtime-docs].
+
+[kube-cgroup-docs]: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/#migrating-to-the-systemd-driver
+[kube-runtime-docs]: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
diff --git a/content/docs/latest/container-runtimes/use-a-custom-docker-or-containerd-version.md b/content/docs/latest/container-runtimes/use-a-custom-docker-or-containerd-version.md
new file mode 100644
index 00000000..9a2a1f06
--- /dev/null
+++ b/content/docs/latest/container-runtimes/use-a-custom-docker-or-containerd-version.md
@@ -0,0 +1,155 @@
+---
+title: Using a custom Docker or containerd version (LEGACY)
+linktitle: Using custom versions
+description: How to download and run a different version of docker or containerd than the one shipped by Flatcar.
+weight: 30
+aliases:
+ - ../os/use-a-custom-docker-or-containerd-version
+---
+
+Some system tooling can't be run on Container Linux via containers, and this is especially true for the container runtime itself.
+As with other special binaries you want to bring to the system, you can use an Ignition config that downloads the binaries.
+Starting with Flatcar version 3185.0.0, a [systemd-sysext image](../provisioning/sysext/) should be used instead of the approach below.
+
+For custom Docker/containerd binaries, sysext images are the recommended way.
+However, Flatcar versions below 3185.0.0 don't support them yet, and even where support is available you may find it too complicated to build a sysext image and host it somewhere.
+In that case you can place the custom binaries directly in `/opt/bin/`, as done by the following Butane Config, which you can transpile to an Ignition config with [`butane`](../provisioning/config-transpiler/).
+
+This replicates the Docker setup as of Flatcar Container Linux 3033.2.3 but under `/etc` and `/opt/bin/`, and with additional support for the upstream Containerd socket location.
+You can modify it to use different socket paths or plugins, or even only ship `containerd` if you don't need Docker.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: prepare-docker.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Unpack docker binaries to /opt/bin
+ ConditionPathExists=!/opt/bin/docker
+ [Service]
+ Type=oneshot
+ RemainAfterExit=true
+ Restart=on-failure
+ ExecStartPre=/usr/bin/mkdir -p /opt/bin
+ ExecStartPre=/usr/bin/tar -v --extract --file /opt/docker.tgz --directory /opt/ --no-same-owner
+ ExecStartPre=/usr/bin/rm /opt/docker.tgz
+ ExecStartPre=/usr/bin/sh -c "mv /opt/docker/* /opt/bin/"
+ ExecStart=/usr/bin/rmdir /opt/docker
+ [Install]
+ WantedBy=multi-user.target
+ - name: docker.socket
+ enabled: true
+ contents: |
+ [Unit]
+ PartOf=docker.service
+ Description=Docker Socket for the API
+ [Socket]
+ ListenStream=/var/run/docker.sock
+ SocketMode=0660
+ SocketUser=root
+ SocketGroup=docker
+ [Install]
+ WantedBy=sockets.target
+ - name: docker.service
+ enabled: false
+ contents: |
+ [Unit]
+ Description=Docker Application Container Engine
+ After=containerd.service docker.socket network-online.target prepare-docker.service
+ Wants=network-online.target
+ Requires=containerd.service docker.socket prepare-docker.service
+ [Service]
+ Type=notify
+ EnvironmentFile=-/run/flannel/flannel_docker_opts.env
+ Environment=DOCKER_SELINUX=--selinux-enabled=true
+ # the default is not to use systemd for cgroups because the delegate issues still
+ # exists and systemd currently does not support the cgroup feature set required
+ # for containers run by docker
+ Environment=PATH=/opt/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ ExecStart=/opt/bin/dockerd --host=fd:// --containerd=/run/docker/libcontainerd/docker-containerd.sock $DOCKER_SELINUX $DOCKER_OPTS $DOCKER_CGROUPS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
+ ExecReload=/bin/kill -s HUP $MAINPID
+ LimitNOFILE=1048576
+ # Having non-zero Limit*s causes performance problems due to accounting overhead
+ # in the kernel. We recommend using cgroups to do container-local accounting.
+ LimitNPROC=infinity
+ LimitCORE=infinity
+ # Uncomment TasksMax if your systemd version supports it.
+ # Only systemd 226 and above support this version.
+ TasksMax=infinity
+ TimeoutStartSec=0
+ # set delegate yes so that systemd does not reset the cgroups of docker containers
+ Delegate=yes
+ # kill only the docker process, not all processes in the cgroup
+ KillMode=process
+ # restart the docker process if it exits prematurely
+ Restart=on-failure
+ StartLimitBurst=3
+ StartLimitInterval=60s
+ [Install]
+ WantedBy=multi-user.target
+ - name: containerd.service
+ enabled: false
+ contents: |
+ [Unit]
+ Description=containerd container runtime
+ After=network.target prepare-docker.service
+ Requires=prepare-docker.service
+ [Service]
+ Delegate=yes
+ Environment=PATH=/opt/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ ExecStartPre=mkdir -p /run/docker/libcontainerd
+ ExecStartPre=ln -fs /run/containerd/containerd.sock /run/docker/libcontainerd/docker-containerd.sock
+ ExecStart=/opt/bin/containerd --config /etc/containerd/config.toml
+ KillMode=process
+ Restart=always
+ # (lack of) limits from the upstream docker service unit
+ LimitNOFILE=1048576
+ LimitNPROC=infinity
+ LimitCORE=infinity
+ TasksMax=infinity
+ [Install]
+ WantedBy=multi-user.target
+storage:
+ files:
+ - path: /etc/systemd/system-generators/torcx-generator
+ - path: /opt/docker.tgz
+ mode: 0644
+ contents:
+ source: https://download.docker.com/linux/static/stable/x86_64/docker-20.10.12.tgz
+ verification:
+ hash: sha512-90c3ab8c465bfa6fa51e9e77cf5257ff4bf139723eeb4878afbf294e71a2f2f13558840708e392ff24f8b8853c519938013d4dff8d50b17d66ca0eeb6a1b3c1a
+ - path: /etc/containerd/config.toml
+ mode: 0644
+ contents:
+ inline: |
+ version = 2
+ # set containerd's OOM score
+ oom_score = -999
+ [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
+ # setting runc.options unsets parent settings
+ runtime_type = "io.containerd.runc.v2"
+ [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
+ SystemdCgroup = true
+ links:
+ - path: /etc/extensions/docker-flatcar.raw
+ target: /dev/null
+ overwrite: true
+ - path: /etc/extensions/containerd-flatcar.raw
+ target: /dev/null
+ overwrite: true
+```
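The `verification.hash` above uses the `sha512-<hex>` form Ignition expects. When pinning a different Docker release, the digest can be computed from the downloaded tarball; the snippet below uses a stand-in file so it runs anywhere:

```sh
# In practice the input is the downloaded docker-<version>.tgz;
# the stand-in file only keeps the example self-contained.
printf 'stand-in tarball contents' > docker.tgz
sha512sum docker.tgz | awk '{print "sha512-" $1}'
```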
+
+While the system services above have a `PATH` variable that prefers `/opt/bin/` by placing it first, you have to run the following command in every interactive login shell (also after `sudo` or `su`) to make sure you use the correct binaries.
+
+```sh
+export PATH="/opt/bin:$PATH"
+```
+
+The empty file `/etc/systemd/system-generators/torcx-generator` disables Torcx to make sure it is not used accidentally in case `/opt/bin` is missing from the `PATH` variable.
+Flatcar releases newer than major release 3760 do not ship Torcx, so that line can be removed from the above config.
+However, leaving it in has no side effects.
+
+The `/etc/extensions/` symlinks make sure that the future built-in Docker/containerd sysext images won't be enabled.
diff --git a/content/docs/latest/contribute/_index.md b/content/docs/latest/contribute/_index.md
new file mode 100644
index 00000000..a4902e12
--- /dev/null
+++ b/content/docs/latest/contribute/_index.md
@@ -0,0 +1,58 @@
+---
+title: How to Contribute
+content_type: contribute
+weight: 130
+---
+
+Flatcar documentation is released under the [Apache 2.0 License][asl], and we welcome contributions. Check out the [help-wanted tag][help-wanted] in this project's Issues list for good places to start participating.
+
+Submit fixes and additions in the form of [GitHub *Pull Requests* (PRs)][pull-requests]. The general process is the typical git fork-branch-PR-review-merge cycle:
+
+1. Fork this repository into your GitHub account
+2. Make changes in a topic branch or your fork's `main`
+3. Send a Pull Request from that topic branch to flatcar/flatcar-docs
+4. Maintainers will review the PR and either merge it or make comments
+
+Cognizance of the tribal customs described and linked to below will help get your contributions incorporated with the greatest of ease.
+
+## Clear commit messages
+
+Commit messages follow a format that makes clear **what** changed and **why** it changed. The first line of each commit message should clearly state what module or file changed, summarize the change very briefly, and should end, without a period, somewhere short of 70 characters. After a blank line, the body of the commit message should then explain why the change was needed, with lines wrapped at 72 characters wide and sentences normally punctuated. Cite related issues or previous revisions as appropriate. For example:
+
+```
+ignition: Update etcd example to use %m
+
+Make the etcd configuration example use ignition's %m instead of the
+ETCD_NAME environment variable. Fixes #123.
+```
+
+This format can be described somewhat more formally as:
+
+```
+<module>: <brief summary of what changed>
+
+<explanation of why the change was needed>
+
+[<footer>]
+```
+
+Where the optional `[<footer>]` might include `signed-off-by` lines and other metadata.
+
+## Style guide
+
+The [style guide][style] prescribes the conventions of formatting and English style preferred in Flatcar project documentation.
+
+## Translations
+
+We happily accept accurate translations. Please send the documents as a pull request and follow two guidelines:
+
+1. Name the files identically to the originals, but put them beneath a directory named for the translation's `gettext` locale. For example: `JA_JP/doc.md`, `ZH_CN/doc.md`, or `KO_KN/doc.md`.
+
+2. Add an explanation about the translated document to the top of the file: "These documents were translated into Esperanto by Community Member and last updated on 2015-12-01. If you find inaccuracies or problems please file an issue on GitHub."
+
+
+[asl]: https://github.com/flatcar/flatcar-docs/blob/main/LICENSE
+[flatcar-docs]: https://flatcar.org/docs/latest/
+[help-wanted]: https://github.com/flatcar/Flatcar/labels/kind%2Fdocs
+[pull-requests]: https://help.github.com/articles/using-pull-requests/
+[style]: docs
diff --git a/content/docs/latest/contribute/docs.md b/content/docs/latest/contribute/docs.md
new file mode 100644
index 00000000..0f8a0912
--- /dev/null
+++ b/content/docs/latest/contribute/docs.md
@@ -0,0 +1,262 @@
+---
+title: Documentation Style and Formatting
+linktitle: Docs Style & Formatting
+weight: 10
+aliases:
+ - ../os/docs
+---
+
+## English style
+
+Write short sentences. Organize concepts in paragraphs. Prefer lists to tables and paragraphs to lists. Write in the active voice. Avoid jargon beyond the requirements of subject and audience.
+
+### Eschew you
+
+You write unambiguous documentation, so you avoid the second person. Avoiding personal pronouns in general helps produce the imperative impersonal tone desired for documentation. Don't reboot your system or have the user reboot their system. Reboot the system.
+
+### Generalities
+
+There are a few other common ways to write or not write things:
+
+* Expand acronyms on their introduction in a document, with the short form following in parentheses: Trusted Platform Module (TPM).
+* Terms of art that are not commands or other literal text should often be italicized on their first appearance in a document: *Kubernetes* is a good example.
+* The [hyphen is overused and most English compounds do not require it][economist-hyphens].
+* There is one space (` `) after a period (aka *full stop*, `.`), comma (`,`), semicolon (`;`) and other marks of punctuation.
+
+### Specifics
+
+There are a few prescribed ways of writing frequently questioned words and phrases:
+
+* The singular possessive form of CoreOS is *CoreOS's*. *CoreOS's mission is to secure the infrastructure that powers the Internet.*
+* Deployments may occur *on-premises*, sometimes "on-prem," but never on-premise. A *premises* is a place. A *premise* is a proposition.
+* *GIFEE* was formerly *Google's Infrastructure for Everyone Else*, but now it is *Google's Infrastructure for EveryonE*.
+
+#### Project names are proper nouns
+
+Project names are proper nouns written with an initial capital letter. Examples include Ignition, Dex, and Matchbox.
+
+The Linux distribution is called Flatcar Container Linux.
+
+These capitalization rules are traditional and arcane. They should eventually give way to all project and product names being capitalized as proper nouns, except when given literally, e.g., `rkt run docker://nginx` or `/var/lib/rkt`.
+
+## Unix style: Command line grammar
+
+*Commands* *invoke* or *execute* programs. Commands *take* *arguments* and *accept* *options*, which themselves may be *set* to *values*.
+
+### Example: Documenting `echo(1)`
+
+In this simple command line:
+
+```sh
+$ echo -n Example
+Example
+```
+
+`echo` is the command, and `Example` is the argument. The option `-n` suppresses the terminating newline usually emitted by `echo`. A binary option represented by a single letter, like `-n`, is sometimes called a *flag*. The `echo(1)` command prints its argument on the standard output, and a good shell excerpt often includes the expected output of commands, as shown here. The shell prompt character `$` distinguishes input from output.
+
+### Example: Documenting subcommands
+
+Some command lines are more complex. Many commands operate through a set of *subcommands*. `rkt` and several other relevant programs follow this pattern.
+
+```sh
+$ rkt run --debug example.aci
+[...]
+```
+
+In this case the argument to `rkt`, `run`, is a subcommand. `run` in turn accepts the `--debug` option to modify how it executes the ACI image specified by its own argument, `example.aci`.
+
+### Example: Documenting long command lines
+
+Some commands pack many subcommands, arguments, and options on a single line. It is good practice to break such long command lines with newlines, escaped with backslash (`\`), because lines inside code blocks are not soft-wrapped in most presentations. For very long command lines, choose points that break the parameters into logical groups. Lines so wrapped are not indented for vertical alignment.
+
+```sh
+$ docker run --name docsbuilder \
+-i -t \
+-p 80:9001 -p 443:9443 \
+-v /home/core/site:/app:rw \
+-v /etc/ssl/certs:/etc/ssl/certs:ro \
+quay.io/coreosinc/coreos-pages-builder scripts/deploy stage
+```
+
+### Comment conventions
+
+Add comments inline if possible, and before the referenced line of code if not.
+
+```yaml
+staticPasswords:
+- email: "admin@example.com"
+ # bcrypt hash of the string "password".
+ hash: "$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W"
+ username: "admin" # username to display. NOT used during login.
+ userID: "08a8684b-db88-4b73-90a9-3cd1661f5466"
+```
+
+### Placeholder conventions
+
+Use these standard example entities to avoid exposing real URLs, IP Addresses, or other data.
+
+* URL: [example.com][rfc2606s3]
+* IP Address: [Any in the range 203.0.113.0/24][rfc5737]
+
+## Source formatting
+
+Flatcar documentation is written in [Markdown][mdhome], a simple way to annotate text to indicate presentation typesetting. Markdown source is intended to be a plain text human-readable version of the document, even before conversion to HTML for the browser or other display.
+
+### Source file naming and encoding
+
+Write Markdown source in UTF-8-encoded plain text files, named with a reasonable, lower case short form of the document's title, and suffixed with `.md`. Prefer hyphens to underscores in file names with two or more words. For example, instructions for DNS configuration are written to a file named [`configuring-dns.md`][configuring-dns].
+
+### Line wrapping considered harmful
+
+Don't wrap long lines of text with manual newlines. Line wrapping churns prose documents, because lines not actually edited will nevertheless change when a paragraph is edited and rewrapped.
+
+### One sentence per line deprecated
+
+Do not add a line break between sentences. Write natural English paragraphs, separated by a single blank line. Writing Markdown source with a newline between every sentence is acceptable to most compilers and can ease change review. However, this format makes the document less readable in source form.
+
+### Preferred markdown symbols
+
+Markdown defines two or more ways to declare some document structures. This documentation prefers these Markdown symbols among their alternatives:
+
+* Headings are denoted in Markdown's ATX style, with hash character(s): `#`. See [*Headings*][headings], below.
+* Bulleted lists, like this one, are denoted with the asterisk (`*`), rather than the hyphen.
+* Hyperlink URLs are given in the reference style (`[hyperlinked text][label]`), rather than inline. Hyperlink labels are defined in one list at the end of the document. Relative links are preferred to absolute links. See [*Hyperlink Considerations*][hyperlink-considerations], below.
+* *Italic text* is wrapped with a pair of single asterisks: `*Italics*`; **Bold** with a double pair: `**Bold**`.
+* `Monospace` is indicated between a pair of backticks. This distinguishes literal strings like command names, file paths, or values, e.g., `/bin/markdown`. See [*Command Line Grammar*][command-line-grammar], below.
+* Longer code blocks or file contents are *fenced*: Set off on new lines between pairs of three backticks, rather than indented. A presentation hint specifying the block's language can be given immediately after the opening three backticks, e.g., ```` ```yaml ````.
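As an illustration only, a short passage using these preferred symbols might look like this in source (`example-tool` and `install-guide` are placeholders):

```markdown
### Configuring the tool

*Before* starting, install the **latest** `example-tool` binary as
described in the [install guide][install].

* Verify the version with `example-tool version`
* Copy the generated config to `/etc/example/`

[install]: install-guide
```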
+
+## Headings
+
+By convention, the level one heading, denoted in Markdown by a single hash character (`#`), is the document's title. This document's title is *Documentation style and formatting*.
+
+### Heading style
+
+Each heading is both preceded and followed by a newline. A space separates the Markdown symbols from the heading text. Headings are typed in *Sentence case*, capitalizing the first letter of the first word, but other words only as they would be capitalized if appearing in the middle of a sentence.
+
+### Heading semantics and the sidebar outline
+
+Section headings expose the document's logical structure with a notation of incrementing hash marks (`#[#][...]`) for increasingly nested levels of a hierarchy. With the level one heading devoted to the document title, the second-level headings represent the document's primary concepts.
+
+The site deployment process inspects a document's headings to derive the thumb index outlines seen in the right sidebar of [documentation viewed at docs.flatcar-linux.org][flatcar-docs].
+
+#### Example: This document's source
+
+The abridged skeletal markdown source for this document's headings:
+
+```
+# Documentation style and formatting
+
+## English style
+
+### Eschew you
+
+[...]
+
+## Headings
+
+### Heading style
+
+[...]
+
+## Unix style: Command line grammar
+
+### Example: Documenting `echo(1)`
+
+[...]
+
+## Hyperlink considerations
+
+### Naming
+
+### Marking down the link
+
+#### Example: Reference-style hyperlinking
+
+[...]
+
+## File name extension conventions
+```
+
+### Example: The "average" document
+
+Most documents have a single `h1` (`#`) heading matching the title, two to five `h2` (`##`) headings representing the topic's primary concepts, and one or two `h3` (`###`) and `h4` (`####`) headings organizing details beneath each `h2`.
+
+If a document proves a great deal longer or more structurally complex than those simplistic rules of thumb, there should be a good reason.
+
+![headings styles](Styles.png)
+
+## Hyperlink considerations
+
+### Naming
+
+Name hyperlinks carefully to give them maximum context. For example, note that certain information is in the [style guide][style], rather than just pointing lazily to the style guide [here][style]. The link text "here" gives almost no information about its target. It is helpful to [write a clear sentence][eos] first, then bracket the choice words within to declare them a hyperlink.
+
+### Marking down the link
+
+As mentioned above, the reference style of Markdown hyperlinking is preferred to the inline. Hyperlinks are marked with two pairs of square brackets, the first enclosing the hyperlinked text, the second enclosing a label for the link. Labels are in turn associated with a target URL in a list of declarations at the end of the document. Each label declaration consists of a line beginning with the bracket-enclosed label, a colon, and the target URL (the `href` in HTML). The target URL may optionally be followed by a link title in double quotes. The list of link label declarations should be sorted alphabetically.
+
+#### Example: Reference-style hyperlinking
+
+```markdown
+The reference style of [Markdown hyperlinks][mdlinks] allows for easier
+reading of source and formalizes the declaration of links.
+
+Another paragraph may reference the [project introduction][readme],
+which link will likewise have its label defined at the document's foot.
+
+[mdlinks]: http://daringfireball.net/projects/markdown/syntax#link "Markdown link syntax"
+[readme]: README
+```
+
+#### Relative URLs preferred
+
+Using relative URLs where possible helps portability among multiple presentation targets, as they remain valid even as the site root moves. Absolute linking is obviously necessary for resources external to the document's repository and/or the docs.flatcar-linux.org domain.
+
+For example, there are two ways to refer to the [Flatcar quick start guide][quickstart]'s location. The preferred way is a relative link from the current file's path to the target, which from this document is `os/quickstart.md`. An absolute link to the complete URL is less flexible and more verbose: `https://github.com/flatcar/flatcar-docs/blob/master/os/quickstart.md`.
+
+#### Hyperlink deployment automation
+
+CoreOS documents have two major publication targets: the [docs.flatcar-linux.org documentation][flatcar-docs], and [GitHub's Markdown presentation][githubmd]. The deployment scripts used to build the CoreOS site handle some of the wrinkles arising between the two targets. These scripts expect links to other CoreOS project documentation to refer to the Markdown source; that is, to end with the `.md` file extension. The deployment scripts rewrite hyperlinks to replace that extension with `.html` for presentation. This allows the links to be valid in either context. External links are not rewritten.
+
+## Example: Documenting code blocks
+
+Insert triple backtick (grave accent) characters on a new line before and after a block of code. A tag, such as `yaml`, `sh`, `json`, or `ini`, can be placed after the opening backticks to declare the language in the block. Markdown syntax is not interpreted within the fenced code block, but special characters are replaced with HTML entities.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: etcd-client
+spec:
+ ports:
+ - name: etcd-client-port
+ port: 2379
+ protocol: TCP
+ targetPort: 2379
+ selector:
+ app: etcd
+```
+
+View this document's source to see the Markdown that generates the code block above.
+
+## File name extension conventions
+
+Some file types are commonly identified with more than one file name extension. For example, YAML is usually stored in files whose names end in either `.yml`, or `.yaml`. For the sake of consistency, use the file name extension designated in the following list when referring to or creating files of any of the listed types in CoreOS projects and their documentation.
+
+* YAML: `file.yaml` is preferred to `file.yml`
+* HTML: `file.html`, not `file.htm`
+
+
+[command-line-grammar]: #command-line-grammar
+[configuring-dns]: os/configuring-dns
+[flatcar-docs]: https://docs.flatcar-linux.org/
+[economist-hyphens]: http://www.economist.com/news/books-and-arts/21723088-hyphens-can-be-tricky-they-need-not-drive-you-crazy-hysteria-over-hyphens
+[eos]: https://faculty.washington.edu/heagerty/Courses/b572/public/StrunkWhite.pdf "The Elements of Style"
+[githubmd]: https://help.github.com/articles/github-flavored-markdown/
+[headings]: #headings
+[hyperlink-considerations]: #hyperlink-considerations
+[mdhome]: https://daringfireball.net/projects/markdown/syntax
+[quickstart]: os/quickstart "Relative link from here to CoreOS Quick Start"
+[rfc2606s3]: https://tools.ietf.org/html/rfc2606#section-3
+[rfc5737]: https://tools.ietf.org/html/rfc5737
+[style]: docs "CoreOS Documentation Style"
diff --git a/content/docs/latest/img/cloudca-addinstance.png b/content/docs/latest/img/cloudca-addinstance.png
new file mode 100644
index 00000000..1cfc27cb
Binary files /dev/null and b/content/docs/latest/img/cloudca-addinstance.png differ
diff --git a/content/docs/latest/img/cloudca-addinstance_step1.png b/content/docs/latest/img/cloudca-addinstance_step1.png
new file mode 100644
index 00000000..6ba3ea4b
Binary files /dev/null and b/content/docs/latest/img/cloudca-addinstance_step1.png differ
diff --git a/content/docs/latest/img/cloudca-addinstance_step2.png b/content/docs/latest/img/cloudca-addinstance_step2.png
new file mode 100644
index 00000000..2f6611b9
Binary files /dev/null and b/content/docs/latest/img/cloudca-addinstance_step2.png differ
diff --git a/content/docs/latest/img/cloudca-addinstance_step3.png b/content/docs/latest/img/cloudca-addinstance_step3.png
new file mode 100644
index 00000000..76d2d022
Binary files /dev/null and b/content/docs/latest/img/cloudca-addinstance_step3.png differ
diff --git a/content/docs/latest/img/cloudca-addinstance_step4.png b/content/docs/latest/img/cloudca-addinstance_step4.png
new file mode 100644
index 00000000..dbb38ac8
Binary files /dev/null and b/content/docs/latest/img/cloudca-addinstance_step4.png differ
diff --git a/content/docs/latest/img/cloudca-apiinfo.png b/content/docs/latest/img/cloudca-apiinfo.png
new file mode 100644
index 00000000..fc4bf222
Binary files /dev/null and b/content/docs/latest/img/cloudca-apiinfo.png differ
diff --git a/content/docs/latest/img/cloudca-getapiinfo.png b/content/docs/latest/img/cloudca-getapiinfo.png
new file mode 100644
index 00000000..5a5e35e4
Binary files /dev/null and b/content/docs/latest/img/cloudca-getapiinfo.png differ
diff --git a/content/docs/latest/img/cloudca-instance_detail.png b/content/docs/latest/img/cloudca-instance_detail.png
new file mode 100644
index 00000000..4832d321
Binary files /dev/null and b/content/docs/latest/img/cloudca-instance_detail.png differ
diff --git a/content/docs/latest/img/ct-workflow.svg b/content/docs/latest/img/ct-workflow.svg
new file mode 100644
index 00000000..429a207b
--- /dev/null
+++ b/content/docs/latest/img/ct-workflow.svg
@@ -0,0 +1 @@
+ct-workflow
\ No newline at end of file
diff --git a/content/docs/latest/img/dev.jpg b/content/docs/latest/img/dev.jpg
new file mode 100644
index 00000000..0cc0e34f
Binary files /dev/null and b/content/docs/latest/img/dev.jpg differ
diff --git a/content/docs/latest/img/dev.png b/content/docs/latest/img/dev.png
new file mode 100644
index 00000000..f943f7bc
Binary files /dev/null and b/content/docs/latest/img/dev.png differ
diff --git a/content/docs/latest/img/exoscale-size.png b/content/docs/latest/img/exoscale-size.png
new file mode 100644
index 00000000..d2a398aa
Binary files /dev/null and b/content/docs/latest/img/exoscale-size.png differ
diff --git a/content/docs/latest/img/exoscale-template.png b/content/docs/latest/img/exoscale-template.png
new file mode 100644
index 00000000..073427f7
Binary files /dev/null and b/content/docs/latest/img/exoscale-template.png differ
diff --git a/content/docs/latest/img/exoscale-userdata.png b/content/docs/latest/img/exoscale-userdata.png
new file mode 100644
index 00000000..a9ffb875
Binary files /dev/null and b/content/docs/latest/img/exoscale-userdata.png differ
diff --git a/content/docs/latest/img/gcl-deployed.png b/content/docs/latest/img/gcl-deployed.png
new file mode 100644
index 00000000..b9767a96
Binary files /dev/null and b/content/docs/latest/img/gcl-deployed.png differ
diff --git a/content/docs/latest/img/gcl-deploying.png b/content/docs/latest/img/gcl-deploying.png
new file mode 100644
index 00000000..fb7d7f45
Binary files /dev/null and b/content/docs/latest/img/gcl-deploying.png differ
diff --git a/content/docs/latest/img/gcl-landingpage.png b/content/docs/latest/img/gcl-landingpage.png
new file mode 100644
index 00000000..6978d110
Binary files /dev/null and b/content/docs/latest/img/gcl-landingpage.png differ
diff --git a/content/docs/latest/img/gcl-launcherconfig.png b/content/docs/latest/img/gcl-launcherconfig.png
new file mode 100644
index 00000000..9eada3d0
Binary files /dev/null and b/content/docs/latest/img/gcl-launcherconfig.png differ
diff --git a/content/docs/latest/img/gcl-ssh.png b/content/docs/latest/img/gcl-ssh.png
new file mode 100644
index 00000000..6ce384be
Binary files /dev/null and b/content/docs/latest/img/gcl-ssh.png differ
diff --git a/content/docs/latest/img/ikoula-deploy-instance-menu.png b/content/docs/latest/img/ikoula-deploy-instance-menu.png
new file mode 100755
index 00000000..03954422
Binary files /dev/null and b/content/docs/latest/img/ikoula-deploy-instance-menu.png differ
diff --git a/content/docs/latest/img/ikoula-instance-deployed.png b/content/docs/latest/img/ikoula-instance-deployed.png
new file mode 100755
index 00000000..7707c9a7
Binary files /dev/null and b/content/docs/latest/img/ikoula-instance-deployed.png differ
diff --git a/content/docs/latest/img/ikoula-login.png b/content/docs/latest/img/ikoula-login.png
new file mode 100755
index 00000000..dd22bab0
Binary files /dev/null and b/content/docs/latest/img/ikoula-login.png differ
diff --git a/content/docs/latest/img/ikoula-public-cloud.png b/content/docs/latest/img/ikoula-public-cloud.png
new file mode 100755
index 00000000..3c17df5c
Binary files /dev/null and b/content/docs/latest/img/ikoula-public-cloud.png differ
diff --git a/content/docs/latest/img/ikoula-subscriptions.png b/content/docs/latest/img/ikoula-subscriptions.png
new file mode 100755
index 00000000..f2723047
Binary files /dev/null and b/content/docs/latest/img/ikoula-subscriptions.png differ
diff --git a/content/docs/latest/img/image.png b/content/docs/latest/img/image.png
new file mode 100644
index 00000000..8a869c31
Binary files /dev/null and b/content/docs/latest/img/image.png differ
diff --git a/content/docs/latest/img/laptop.jpg b/content/docs/latest/img/laptop.jpg
new file mode 100644
index 00000000..15df8bd7
Binary files /dev/null and b/content/docs/latest/img/laptop.jpg differ
diff --git a/content/docs/latest/img/laptop.png b/content/docs/latest/img/laptop.png
new file mode 100644
index 00000000..e992d832
Binary files /dev/null and b/content/docs/latest/img/laptop.png differ
diff --git a/content/docs/latest/img/prod.jpg b/content/docs/latest/img/prod.jpg
new file mode 100644
index 00000000..3a215c9c
Binary files /dev/null and b/content/docs/latest/img/prod.jpg differ
diff --git a/content/docs/latest/img/prod.png b/content/docs/latest/img/prod.png
new file mode 100644
index 00000000..a55128b4
Binary files /dev/null and b/content/docs/latest/img/prod.png differ
diff --git a/content/docs/latest/img/quay-robot-screen.png b/content/docs/latest/img/quay-robot-screen.png
new file mode 100644
index 00000000..f1788bdc
Binary files /dev/null and b/content/docs/latest/img/quay-robot-screen.png differ
diff --git a/content/docs/latest/img/rimuhosting-coreos-image-select-cloud-config.png b/content/docs/latest/img/rimuhosting-coreos-image-select-cloud-config.png
new file mode 100644
index 00000000..90fed394
Binary files /dev/null and b/content/docs/latest/img/rimuhosting-coreos-image-select-cloud-config.png differ
diff --git a/content/docs/latest/img/settings.png b/content/docs/latest/img/settings.png
new file mode 100644
index 00000000..a7304412
Binary files /dev/null and b/content/docs/latest/img/settings.png differ
diff --git a/content/docs/latest/img/size.png b/content/docs/latest/img/size.png
new file mode 100644
index 00000000..4366036a
Binary files /dev/null and b/content/docs/latest/img/size.png differ
diff --git a/content/docs/latest/img/small.jpg b/content/docs/latest/img/small.jpg
new file mode 100644
index 00000000..a7c76ab2
Binary files /dev/null and b/content/docs/latest/img/small.jpg differ
diff --git a/content/docs/latest/img/small.png b/content/docs/latest/img/small.png
new file mode 100644
index 00000000..c64a883a
Binary files /dev/null and b/content/docs/latest/img/small.png differ
diff --git a/content/docs/latest/img/supply-chain-build.png b/content/docs/latest/img/supply-chain-build.png
new file mode 100644
index 00000000..26f50b6f
Binary files /dev/null and b/content/docs/latest/img/supply-chain-build.png differ
diff --git a/content/docs/latest/img/supply-chain-provision-runtime.png b/content/docs/latest/img/supply-chain-provision-runtime.png
new file mode 100644
index 00000000..78df45bd
Binary files /dev/null and b/content/docs/latest/img/supply-chain-provision-runtime.png differ
diff --git a/content/docs/latest/img/supply-chain-threats-slsa.png b/content/docs/latest/img/supply-chain-threats-slsa.png
new file mode 100644
index 00000000..757a1ad0
Binary files /dev/null and b/content/docs/latest/img/supply-chain-threats-slsa.png differ
diff --git a/content/docs/latest/img/template.png b/content/docs/latest/img/template.png
new file mode 100644
index 00000000..073427f7
Binary files /dev/null and b/content/docs/latest/img/template.png differ
diff --git a/content/docs/latest/img/update-timeline.png b/content/docs/latest/img/update-timeline.png
new file mode 100644
index 00000000..2e2ec35b
Binary files /dev/null and b/content/docs/latest/img/update-timeline.png differ
diff --git a/content/docs/latest/img/userdata.png b/content/docs/latest/img/userdata.png
new file mode 100644
index 00000000..a9ffb875
Binary files /dev/null and b/content/docs/latest/img/userdata.png differ
diff --git a/content/docs/latest/img/vmware-ip.png b/content/docs/latest/img/vmware-ip.png
new file mode 100644
index 00000000..1c279a39
Binary files /dev/null and b/content/docs/latest/img/vmware-ip.png differ
diff --git a/content/docs/latest/installing/_index.md b/content/docs/latest/installing/_index.md
new file mode 100644
index 00000000..62f3d525
--- /dev/null
+++ b/content/docs/latest/installing/_index.md
@@ -0,0 +1,216 @@
+---
+title: Getting Started with Flatcar Container Linux
+linktitle: Installing
+weight: 2
+aliases:
+ - os/quickstart
+ - quickstart
+---
+
+This guide aims to get you up and running in a few minutes.
+We'll cover:
+- concepts, configuration, and provisioning
+- writing a basic Flatcar configuration and testing it locally with qemu
+- further reading - where to go from here after learning the basics
+
+### Concepts, Configuration, and Provisioning
+
+Flatcar Container Linux is configured at _provisioning time_.
+There are two configuration languages to set up Flatcar, aimed at different use cases:
+- [Butane Config][butane-configs]: Butane is human-readable and writable YAML that must be converted (transpiled) into an Ignition v3 config before Flatcar can use it. It is the successor of the [Container Linux Config][cl-configs], which is also still supported (Butane is not supported on LTS-2022).
+- [Ignition config][ignition] is machine-readable JSON fed to Flatcar's ignition service.
+  Ignition is Flatcar's "installation service", which configures a Flatcar instance during provisioning.
+ The config file is passed via the "custom data" or "user data" option of cloud providers, and can be supplied by various mechanisms to private cloud VMs and bare metal.
+ Ignition config is rarely written by a human; it's usually generated by provisioning automation or transpiled from user-written Butane.
+
+Use Butane to customise your Flatcar deployment, e.g. to
+- add custom users and groups
+- create and manage storage devices, file systems, and swap, and create custom files
+- customise automatic updates and define reboot windows
+- create custom network(d) configurations and systemd units
+ - Want to e.g. run a custom operation or start a service each time the instance boots?
+ Define a custom [systemd unit][systemd] via Butane.
+
+All of the above tasks take only a few lines of YAML and are covered in our [Butane examples][butane-examples].
+For a comprehensive discussion of all the options available in Butane, have a look at the [Butane specification][butane-spec].
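+As a small illustration, the following Butane YAML adds a custom user and writes a custom file - a minimal sketch with placeholder names and keys, not one of the official examples:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+  users:
+    - name: deploy                      # hypothetical custom user
+      groups: [ sudo, docker ]
+      ssh_authorized_keys:
+        - ssh-rsa AAAAB......xyz email@host.net
+storage:
+  files:
+    - path: /etc/motd
+      mode: 0644
+      contents:
+        inline: Welcome to Flatcar Container Linux!
+```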
+
+To convert Butane into machine-readable Ignition config, just download the latest release of [Butane][butane]. Alternatively, you can use the up-to-date [container image][container-image] with `docker run --rm -i quay.io/coreos/butane:latest`.
+
+Then, after converting Butane to Ignition config, pass the Ignition config along when provisioning your instance(s).
+The config is then passed via "user data" / "custom data" or similar means to the provisioning logic.
+
+#### Writing your first config and testing it locally in a qemu VM
+
+It doesn't take much to go from a small [Butane Config YAML][butane-configs] or [Ignition JSON][ignition] file to a local [QEMU VM][qemu-docs] on your laptop.
+Here we will create a systemd service that starts an NGINX container as an example configuration for the VM.
+This is a good starting point for you to modify the Butane YAML file (or the Ignition JSON file) and test it by provisioning a temporary QEMU VM.
+This should work on most Linux systems and assumes you have an SSH key set up for ssh-agent.
+
+First, download the Flatcar QEMU image and the helper script that starts it with QEMU, but don't run anything yet.
+```shell
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh
+chmod +x flatcar_production_qemu.sh
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+bunzip2 flatcar_production_qemu_image.img.bz2
+```
+
+For Ignition configurations to be recognized we have to make sure that we always boot an unmodified fresh image because Ignition only runs on first boot.
+Therefore, before trying to use an Ignition config we will always discard the image modifications by using a fresh copy.
+You can already boot the image with `./flatcar_production_qemu.sh` and have a look around in the OS through the QEMU VGA console - you can close the QEMU window or stop the script with `Ctrl-C`.
+```shell
+mv flatcar_production_qemu_image.img flatcar_production_qemu_image.img.fresh
+# If you want to have a first look, boot it and wait for the autologin to give you a prompt:
+cp -i --reflink=auto flatcar_production_qemu_image.img.fresh flatcar_production_qemu_image.img
+```
+
+Now we will provision the VM on first boot through Ignition.
+Instead of writing the JSON config we use Butane YAML and transpile it.
+Save the following Butane YAML file as `cl.yaml` (or another name).
+It contains directives for setting up a systemd service that runs an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Before we can use it we have to transpile the Butane YAML to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
+
+You can also skip this step and copy the resulting JSON file from here to `ignition.json` (or another name):
+
+```json
+{
+ "ignition": {
+ "version": "3.3.0"
+ },
+ "systemd": {
+ "units": [
+ {
+      "contents": "[Unit]\nDescription=NGINX example\nAfter=docker.service\nRequires=docker.service\n[Service]\nTimeoutStartSec=0\nExecStartPre=-/usr/bin/docker rm --force nginx1\nExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1\nExecStop=/usr/bin/docker stop nginx1\nRestart=always\nRestartSec=5s\n[Install]\nWantedBy=multi-user.target\n",
+ "enabled": true,
+ "name": "nginx.service"
+ }
+ ]
+ }
+}
+```
+
+The final step is to boot the VM and make the Ignition configuration available to it.
+As said, the provisioning will only be done on first boot and if you want your (changed) Ignition configuration to be used, you have to boot from a fresh copy.
+You can repeat these combined steps as often as you want to test your Ignition changes.
+
+```shell
+# Make sure we boot a fresh copy:
+cp -i --reflink=auto flatcar_production_qemu_image.img.fresh flatcar_production_qemu_image.img
+./flatcar_production_qemu.sh -i ignition.json
+# Log in via SSH in a new terminal tab:
+ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 2222 core@127.0.0.1
+# Check that NGINX is running:
+systemctl status nginx
+curl http://localhost/
+```
+
+_NOTE_: For SSH access, you can also use the `~/.ssh/config` snippet provided in the [QEMU][qemu-ssh] section; then simply run `ssh flatcar`, or `scp my-file flatcar:/home/core` to send a file to the instance over SSH.
+
+If you have trouble SSHing into the VM, `./flatcar_production_qemu.sh` might have failed to auto-detect your SSH key.
+If that happens, try a user-supplied SSH key using the YAML snippet below.
+Alternatively, you can interact with the VM via the VGA console - the console has auto-login enabled and drops right into a shell.
+
+You can reboot and stop the VM if you like - when you start it later with a plain `./flatcar_production_qemu.sh` then our systemd unit will take care of starting NGINX on each boot.
+Note that the Ignition config is only processed on the very first boot - that's why we made a copy, so now we can restore our OS image from the pristine copy for successive experiments with Butane.
+
+As listed in the introduction above, there are numerous options available for configuring Flatcar just the way you need it.
+For instance, you can specify a custom SSH key instead of your default one from your ssh-agent or from `~/.ssh/` in the Butane config, by adding this section to your YAML file:
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB......xyz email@host.net
+```
+
+Afterwards, transpile it again to Ignition JSON, overwrite `flatcar_production_qemu_image.img` with the fresh image file, and pass the Ignition config to `./flatcar_production_qemu.sh` once again.
+
+
+### On automatic updates
+
+Flatcar has automatic updates enabled by default.
+Flatcar instances will download and stage (in the background) new OS versions as well as reboot into the updated OS when a new update becomes available.
+To change this default - for instance, to define reboot windows or even disable reboots - check out the [update strategies][update-strategies] doc.
+To disable downloading updates altogether either disable the `update-engine` service via a user-supplied systemd config, or use an invalid URL in the `SERVER` field of [`update.conf`][update-conf].
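+For example, a Butane snippet like the following overwrites [`update.conf`][update-conf] with an unreachable `SERVER` value so that no updates are downloaded (a sketch; the URL is a deliberately invalid placeholder):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/flatcar/update.conf
+      overwrite: true
+      mode: 0644
+      contents:
+        inline: |
+          SERVER=https://updates.invalid
+```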
+
+
+### More on configuring and operating Flatcar Container Linux
+
+The documentation includes a whole section on configuration, operation, and maintenance.
+Have a look at the [setup guide][setup] for more information.
+
+### Further reading: Platform / vendor specific information
+
+Check out the guides on [running Flatcar Container Linux][running-container-linux] on most cloud providers:
+* [EC2][ec2-docs]
+* [Azure][azure-docs]
+* [GCE][gce-docs]
+* [Equinix Metal][equinix-metal-docs]
+
+virtualization platforms / private clouds:
+
+* [Vagrant][vagrant-docs]
+* [VMware][vmware-docs]
+* [VirtualBox][virtualbox-docs]
+* [QEMU/KVM][qemu-docs]/[libVirt][libvirt-docs]
+
+and bare metal servers:
+
+* [PXE][pxe-docs]
+* [iPXE][ipxe-docs]
+* [ISO][iso-docs]
+* [Installer][install-docs]
+
+With any of these guides you will have machines up and running in a few minutes.
+
+
+[update-strategies]: ../setup/releases/update-strategies
+[update-conf]: ../setup/releases/update-conf
+[setup]: ../setup
+[running-container-linux]: ../#installing-flatcar
+[ec2-docs]: cloud/aws-ec2
+[azure-docs]: cloud/azure
+[gce-docs]: cloud/gcp
+[vagrant-docs]: vms/vagrant
+[vmware-docs]: cloud/vmware
+[virtualbox-docs]: vms/virtualbox
+[qemu-docs]: vms/qemu
+[qemu-ssh]: vms/qemu#ssh-config
+[libvirt-docs]: vms/libvirt
+[equinix-metal-docs]: cloud/equinix-metal
+[pxe-docs]: bare-metal/booting-with-pxe
+[ipxe-docs]: bare-metal/booting-with-ipxe
+[iso-docs]: bare-metal/booting-with-iso
+[install-docs]: bare-metal/installing-to-disk
+[ignition]: ../provisioning/ignition/
+[cl-configs]: ../provisioning/cl-config
+[butane-configs]: ../provisioning/config-transpiler
+[butane-examples]: ../provisioning/config-transpiler/examples
+[butane-spec]: ../provisioning/config-transpiler/configuration
+[systemd]: ../setup/systemd/getting-started
+[container-image]: https://quay.io/repository/coreos/butane
+[butane]: https://github.com/coreos/butane/releases
diff --git a/content/docs/latest/installing/bare-metal/_index.md b/content/docs/latest/installing/bare-metal/_index.md
new file mode 100644
index 00000000..ef6271d0
--- /dev/null
+++ b/content/docs/latest/installing/bare-metal/_index.md
@@ -0,0 +1,7 @@
+---
+title: Bare Metal
+weight: 30
+description: This section provides information and guidance on running Flatcar instances in bare-metal environments.
+aliases:
+ - ../bare-metal
+---
diff --git a/content/docs/latest/installing/bare-metal/booting-with-ipxe.md b/content/docs/latest/installing/bare-metal/booting-with-ipxe.md
new file mode 100644
index 00000000..5884cff6
--- /dev/null
+++ b/content/docs/latest/installing/bare-metal/booting-with-ipxe.md
@@ -0,0 +1,177 @@
+---
+title: Booting Flatcar Container Linux via iPXE
+linktitle: Booting via iPXE
+weight: 10
+aliases:
+ - ../../os/booting-with-ipxe
+ - ../../bare-metal/booting-with-ipxe
+---
+
+These instructions will walk you through booting Flatcar Container Linux via iPXE on real or virtual hardware. By default, this will run Flatcar Container Linux completely out of RAM. Flatcar Container Linux can also be [installed to disk][installing-to-disk].
+
+A minimum of 3 GB of RAM is required to boot Flatcar Container Linux via iPXE.
+
+## Configuring iPXE
+
+iPXE can be used on any platform that can boot an ISO image.
+This includes many cloud providers and physical hardware.
+
+To illustrate iPXE in action we will use qemu-kvm in this guide.
+
+### Setting up iPXE boot script
+
+When configuring the Flatcar Container Linux iPXE boot script there are a few kernel options that may be useful, but all are optional.
+
+- **rootfstype=tmpfs**: Use tmpfs for the writable root filesystem. This is the default behavior.
+- **rootfstype=btrfs**: Use btrfs in RAM for the writable root filesystem. The filesystem will consume more RAM as it grows, up to a max of 50%. The limit isn't currently configurable.
+- **root**: Use a local filesystem for root instead of one of two in-ram options above. The filesystem must be formatted (perhaps using Ignition) but may be completely blank; it will be initialized on boot. The filesystem may be specified by any of the usual ways including device, label, or UUID; e.g: `root=/dev/sda1`, `root=LABEL=ROOT` or `root=UUID=2c618316-d17a-4688-b43b-aa19d97ea821`.
+- **sshkey**: Add the given SSH public key to the `core` user's `authorized_keys` file. Replace the example key below with your own (it is usually in `~/.ssh/id_rsa.pub`)
+- **console**: Enable kernel output and a login prompt on a given tty. The default, `tty0`, generally maps to VGA. Can be used multiple times, e.g. `console=tty0 console=ttyS0`
+- **flatcar.autologin**: Drop directly to a shell on a given console without prompting for a password. Useful for troubleshooting but use with caution. For any console that doesn't normally get a login prompt by default be sure to combine with the `console` option, e.g. `console=tty0 console=ttyS0 flatcar.autologin=tty1 flatcar.autologin=ttyS0`. Without any argument it enables access on all consoles. Note that for the VGA console the login prompts are on virtual terminals (`tty1`, `tty2`, etc), not the VGA console itself (`tty0`).
+- **flatcar.first_boot=1**: Download an Ignition config and use it to provision your booted system. Ignition configs are typically transpiled from Butane Configs. See the [Butane Config documentation][butane-configs] for more information. If a local filesystem is used for the root partition, pass this parameter only on the first boot.
+- **ignition.config.url**: Download the Ignition config from the specified URL. `http`, `https`, `s3`, and `tftp` schemes are supported.
+- **ip**: Configure temporary static networking for initramfs. This parameter does not influence the final network configuration of the node and is mostly useful for first-boot provisioning of systems in DHCP-less environments. See [Ignition documentation][ignition-kargs-ip] for the complete syntax.
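+Several of the options above can be combined on the `kernel` line of an iPXE script - in this sketch the Ignition config URL is a placeholder:
+
+```shell
+#!ipxe
+
+set base-url http://stable.release.flatcar-linux.net/amd64-usr/current
+kernel ${base-url}/flatcar_production_pxe.vmlinuz initrd=flatcar_production_pxe_image.cpio.gz console=tty0 console=ttyS0 flatcar.first_boot=1 ignition.config.url=https://example.com/pxe-config.ign
+initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
+boot
+```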
+
+### Choose a Channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+### Setting up the Boot Script
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+iPXE downloads a boot script from a publicly available URL. You will need to host this script somewhere public and replace the example Ignition config URL with your own. You can also run a custom iPXE server.
+
+```shell
+#!ipxe
+
+set base-url http://alpha.release.flatcar-linux.net/amd64-usr/current
+kernel ${base-url}/flatcar_production_pxe.vmlinuz initrd=flatcar_production_pxe_image.cpio.gz flatcar.first_boot=1 ignition.config.url=https://example.com/pxe-config.ign
+initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
+boot
+```
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+iPXE downloads a boot script from a publicly available URL. You will need to host this script somewhere public and replace the example Ignition config URL with your own. You can also run a custom iPXE server.
+
+```shell
+#!ipxe
+
+set base-url http://beta.release.flatcar-linux.net/amd64-usr/current
+kernel ${base-url}/flatcar_production_pxe.vmlinuz initrd=flatcar_production_pxe_image.cpio.gz flatcar.first_boot=1 ignition.config.url=https://example.com/pxe-config.ign
+initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
+boot
+```
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+iPXE downloads a boot script from a publicly available URL. You will need to host this script somewhere public and replace the example Ignition config URL with your own. You can also run a custom iPXE server.
+
+```shell
+#!ipxe
+
+set base-url http://stable.release.flatcar-linux.net/amd64-usr/current
+kernel ${base-url}/flatcar_production_pxe.vmlinuz initrd=flatcar_production_pxe_image.cpio.gz flatcar.first_boot=1 ignition.config.url=https://example.com/pxe-config.ign
+initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
+boot
+```
+
+
+
+
+Host this boot script anywhere that can serve the raw file over HTTP, such as a web server or a paste service that provides raw file URLs.
+
+## Butane Configs
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the [docs to learn about the supported features][butane-configs].
+
+You can provide a raw Ignition JSON config to Flatcar Container Linux via the `ignition.config.url` specified above.
+
+As an example, this Butane YAML config will start an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
+### Booting iPXE
+
+First, download and boot the iPXE image.
+We will use `qemu-kvm` in this guide but use whatever process you normally use for booting an ISO on your platform.
+
+```shell
+wget http://boot.ipxe.org/ipxe.iso
+qemu-kvm -m 3072 ipxe.iso -display curses
+```
+
+Next, press Ctrl+B to get to the iPXE prompt and type in the following commands:
+
+```shell
+iPXE> dhcp
+iPXE> chain http://${YOUR_BOOT_URL}
+```
+
+iPXE should immediately download your boot script URL and start grabbing the images from the Flatcar Container Linux storage site:
+
+```shell
+${YOUR_BOOT_URL}... ok
+http://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe.vmlinuz... 98%
+```
+
+After a few moments of downloading, Flatcar Container Linux should boot normally.
+
+## Update process
+
+Since Flatcar Container Linux's upgrade process requires a disk, this image does not have the option to update itself. Instead, the box simply needs to be rebooted and will be running the latest version, assuming that the image served by the PXE server is regularly updated.
+
+## Installation
+
+Flatcar Container Linux can be completely installed on disk or run from RAM but store user data on disk. Read more in our [Installing Flatcar Container Linux guide][pxe-installation].
+
+## Adding a custom OEM
+
+Similar to the [OEM partition][oem] in Flatcar Container Linux disk images, iPXE images can be customized with an [Ignition config][ignition] bundled in the initramfs. You can view the [instructions on the PXE docs][pxe-custom-oem].
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[cl-configs]: ../../provisioning/cl-config
+[butane-configs]: ../../provisioning/config-transpiler
+[ignition]: ../../provisioning/ignition
+[ignition-kargs-ip]: ../../provisioning/ignition/network-configuration/#using-static-ip-addresses-with-ignition
+[oem]: ../community-platforms/notes-for-distributors#image-customization
+[installing-to-disk]: installing-to-disk
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[pxe-installation]: booting-with-pxe#installation
+[pxe-custom-oem]: booting-with-pxe#adding-a-custom-oem
+[quickstart]: ../
+[doc-index]: ../../
+
+
diff --git a/content/docs/latest/installing/bare-metal/booting-with-iso.md b/content/docs/latest/installing/bare-metal/booting-with-iso.md
new file mode 100644
index 00000000..b1b0abea
--- /dev/null
+++ b/content/docs/latest/installing/bare-metal/booting-with-iso.md
@@ -0,0 +1,64 @@
+---
+title: Booting Flatcar Container Linux from an ISO
+linktitle: Booting from an ISO
+weight: 10
+aliases:
+ - ../../os/booting-with-iso
+ - ../../bare-metal/booting-with-iso
+---
+
+The latest Flatcar Container Linux ISOs can be downloaded from the image storage site:
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+[Download Alpha ISO](https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_iso_image.iso)
+
+[Browse Storage Site](https://alpha.release.flatcar-linux.net/amd64-usr/current/)
+
+All of the files necessary to verify the image can be found on the storage site.
+
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+[Download Beta ISO](https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_iso_image.iso)
+
+[Browse Storage Site](https://beta.release.flatcar-linux.net/amd64-usr/current/)
+
+All of the files necessary to verify the image can be found on the storage site.
+
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+[Download Stable ISO](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_iso_image.iso)
+
+[Browse Storage Site](https://stable.release.flatcar-linux.net/amd64-usr/current/)
+
+All of the files necessary to verify the image can be found on the storage site.
+
+
+
+
+## Known limitations
+
+1. UEFI boot is not currently supported. Boot the system in BIOS compatibility mode.
+2. There is no straightforward way to provide an [Ignition config][cl-configs].
+3. A minimum of 2 GB of RAM is required to boot Flatcar Container Linux via ISO.
+
+## Install to disk
+
+The most common use-case for this ISO is to install Flatcar Container Linux to disk. You can [find those instructions here][installing-to-disk].
+
+## No authentication on console
+
+The ISO is configured to start a shell on the console without prompting for a password. This is convenient for installation and troubleshooting, but use caution.
+
+[cl-configs]: ../../provisioning/cl-config
+[installing-to-disk]: installing-to-disk
diff --git a/content/docs/latest/installing/bare-metal/booting-with-pxe.md b/content/docs/latest/installing/bare-metal/booting-with-pxe.md
new file mode 100644
index 00000000..62f95ffd
--- /dev/null
+++ b/content/docs/latest/installing/bare-metal/booting-with-pxe.md
@@ -0,0 +1,261 @@
+---
+title: Booting Flatcar Container Linux via PXE
+linktitle: Booting via PXE
+weight: 10
+aliases:
+ - ../../os/booting-with-pxe
+ - ../../bare-metal/booting-with-pxe
+---
+
+These instructions will walk you through booting Flatcar Container Linux via PXE on real or virtual hardware. By default, this will run Flatcar Container Linux completely out of RAM. Flatcar Container Linux can also be [installed to disk][installing-to-disk].
+
+A minimum of 3 GB of RAM is required to boot Flatcar Container Linux via PXE.
+
+## Configuring pxelinux
+
+This guide assumes you already have a working PXE server using [pxelinux][pxelinux]. If you need suggestions on how to set a server up, check out guides for [Debian][debian-pxe], [Fedora][fedora-pxe] or [Ubuntu][ubuntu-pxe].
+
+[debian-pxe]: https://wiki.debian.org/PXEBootInstall
+[ubuntu-pxe]: https://help.ubuntu.com/community/DisklessUbuntuHowto
+[fedora-pxe]: http://docs.fedoraproject.org/en-US/Fedora/7/html/Installation_Guide/ap-pxe-server.html
+[pxelinux]: http://www.syslinux.org/wiki/index.php/PXELINUX
+
+### Setting up pxelinux.cfg
+
+When configuring the Flatcar Container Linux pxelinux.cfg there are a few kernel options that may be useful, but all are optional.
+
+- **rootfstype=tmpfs**: Use tmpfs for the writable root filesystem. This is the default behavior.
+- **rootfstype=btrfs**: Use btrfs in RAM for the writable root filesystem. The filesystem will consume more RAM as it grows, up to a max of 50%. The limit isn't currently configurable.
+- **root**: Use a local filesystem for root instead of one of two in-ram options above. The filesystem must be formatted (perhaps using Ignition) but may be completely blank; it will be initialized on boot. The filesystem may be specified by any of the usual ways including device, label, or UUID; e.g: `root=/dev/sda1`, `root=LABEL=ROOT` or `root=UUID=2c618316-d17a-4688-b43b-aa19d97ea821`.
+- **sshkey**: Add the given SSH public key to the `core` user's authorized_keys file. Replace the example key below with your own (it is usually in `~/.ssh/id_rsa.pub`)
+- **console**: Enable kernel output and a login prompt on a given tty. The default, `tty0`, generally maps to VGA. Can be used multiple times, e.g. `console=tty0 console=ttyS0`
+- **flatcar.autologin**: Drop directly to a shell on a given console without prompting for a password. Useful for troubleshooting but use with caution. For any console that doesn't normally get a login prompt by default be sure to combine with the `console` option, e.g. `console=tty0 console=ttyS0 flatcar.autologin=tty1 flatcar.autologin=ttyS0`. Without any argument it enables access on all consoles. Note that for the VGA console the login prompts are on virtual terminals (`tty1`, `tty2`, etc), not the VGA console itself (`tty0`).
+- **flatcar.first_boot=1**: Download an Ignition config and use it to provision your booted system. Ignition configs are generated from Butane Configs. See the [Butane Config documentation][butane-configs] for more information. If a local filesystem is used for the root partition, pass this parameter only on the first boot.
+- **ignition.config.url**: Download the Ignition config from the specified URL. `http`, `https`, `s3`, and `tftp` schemes are supported.
+- **ip**: Configure temporary static networking for initramfs. This parameter does not influence the final network configuration of the node and is mostly useful for first-boot provisioning of systems in DHCP-less environments. See [Ignition documentation][ignition-kargs-ip] for the complete syntax.
+
+This is an example pxelinux.cfg file that assumes Flatcar Container Linux is the only option. You should be able to copy this verbatim into `/var/lib/tftpboot/pxelinux.cfg/default` after providing an Ignition config URL:
+
+```shell
+default flatcar
+prompt 1
+timeout 15
+
+display boot.msg
+
+label flatcar
+ menu default
+ kernel flatcar_production_pxe.vmlinuz
+ initrd flatcar_production_pxe_image.cpio.gz
+ append flatcar.first_boot=1 ignition.config.url=https://example.com/pxe-config.ign
+```
+
+Here's a Butane YAML example that starts an NGINX Docker container. It should be transpiled to Ignition JSON and hosted at the URL referenced above:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq...
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
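+
+The transpiled file should be well-formed JSON before you serve it; a quick sanity check can be done with python3's built-in JSON tooling (shown here against a minimal stand-in config rather than a real transpiled one):
+
+```shell
+# Write a minimal stand-in Ignition config; with a real setup, skip this
+# step and check the ignition.json produced by butane instead.
+printf '%s\n' '{"ignition":{"version":"3.3.0"}}' > ignition.json
+# json.tool exits non-zero on malformed JSON, so this echoes only on success.
+python3 -m json.tool ignition.json > /dev/null && echo "ignition.json is valid JSON"
+```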
+
+
+### Choose a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+PXE-booted machines cannot currently update themselves when new versions are released to a channel. To update to the latest version of Flatcar Container Linux, download and verify these files again and reboot.
+
+
+
+
+
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+In the config above you can see that a kernel image and an initramfs file are needed. Download these two files into your tftp root.
+
+The `flatcar_production_pxe.vmlinuz.sig` and `flatcar_production_pxe_image.cpio.gz.sig` files can be used to verify the downloaded files.
+
+```shell
+cd /var/lib/tftpboot
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe.vmlinuz
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe.vmlinuz.sig
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe_image.cpio.gz
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe_image.cpio.gz.sig
+gpg --verify flatcar_production_pxe.vmlinuz.sig
+gpg --verify flatcar_production_pxe_image.cpio.gz.sig
+```
+
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+In the config above you can see that a kernel image and an initramfs file are needed. Download these two files into your tftp root.
+
+The `flatcar_production_pxe.vmlinuz.sig` and `flatcar_production_pxe_image.cpio.gz.sig` files can be used to verify the downloaded files.
+
+```shell
+cd /var/lib/tftpboot
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe.vmlinuz
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe.vmlinuz.sig
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe_image.cpio.gz
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe_image.cpio.gz.sig
+gpg --verify flatcar_production_pxe.vmlinuz.sig
+gpg --verify flatcar_production_pxe_image.cpio.gz.sig
+```
+
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+In the config above you can see that a kernel image and an initramfs file are needed. Download these two files into your tftp root.
+
+The `flatcar_production_pxe.vmlinuz.sig` and `flatcar_production_pxe_image.cpio.gz.sig` files can be used to verify the downloaded files.
+
+```shell
+cd /var/lib/tftpboot
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe.vmlinuz
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe.vmlinuz.sig
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe_image.cpio.gz
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_pxe_image.cpio.gz.sig
+gpg --verify flatcar_production_pxe.vmlinuz.sig
+gpg --verify flatcar_production_pxe_image.cpio.gz.sig
+```
+
+
+
+
+
+## Booting the box
+
+After setting up the PXE server as outlined above you can start the target machine in PXE boot mode. The machine should grab the image from the server and boot into Flatcar Container Linux. If something goes wrong you can direct questions to the [IRC channel][irc] or [mailing list][flatcar-user].
+
+```shell
+This is localhost.unknown_domain (Linux x86_64 3.10.10+) 19:53:36
+SSH host key: 24:2e:f1:3f:5f:9c:63:e5:8c:17:47:32:f4:09:5d:78 (RSA)
+SSH host key: ed:84:4d:05:e3:7d:e3:d0:b9:58:90:58:3b:99:3a:4c (DSA)
+ens0: 10.0.2.15 fe80::5054:ff:fe12:3456
+localhost login:
+```
+
+## Logging in
+
+The IP address for the machine should be printed out to the terminal for convenience. If it doesn't show up immediately, press enter a few times and it should show up. Now you can simply SSH in using public key authentication:
+
+```shell
+ssh core@10.0.2.15
+```
+
+## Update Process
+
+Since our upgrade process requires a disk, this image does not have the option to update itself. Instead, the box simply needs to be rebooted and will be running the latest version, assuming that the image served by the PXE server is regularly updated.
+
+## Installation
+
+Once booted it is possible to [install Flatcar Container Linux on a local disk][installing-to-disk] or to just use local storage for the writable root filesystem while continuing to boot Flatcar Container Linux itself via PXE.
+
+If you plan on using Docker, we recommend using a local ext4 filesystem with overlayfs; however, btrfs is also available if needed.
+
+For example, to set up an ext4 root filesystem on `/dev/sda`:
+
+```yaml
+storage:
+ disks:
+ - device: /dev/sda
+ wipe_table: true
+ partitions:
+ - label: ROOT
+ filesystems:
+ - mount:
+ device: /dev/disk/by-partlabel/ROOT
+ format: ext4
+ wipe_filesystem: true
+ label: ROOT
+```
+
+And add `root=/dev/sda1` or `root=LABEL=ROOT` to the kernel options as documented above.
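+
+Combined with the pxelinux.cfg example from earlier, the append line would then read, for instance (recall from above that `flatcar.first_boot=1` should only be passed on the first boot when a local root filesystem is used):
+
+```shell
+append flatcar.first_boot=1 ignition.config.url=https://example.com/pxe-config.ign root=LABEL=ROOT
+```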
+
+Similarly, to set up a btrfs root filesystem on `/dev/sda`:
+
+```yaml
+storage:
+ disks:
+ - device: /dev/sda
+ wipe_table: true
+ partitions:
+ - label: ROOT
+ filesystems:
+ - mount:
+ device: /dev/disk/by-partlabel/ROOT
+ format: btrfs
+ wipe_filesystem: true
+ label: ROOT
+```
+
+## Adding a Custom OEM
+
+Similar to the [OEM partition][oem] in Flatcar Container Linux disk images, PXE images can be customized with an [Ignition config][ignition] bundled in the initramfs. Simply create a `./usr/share/oem/` directory, add a `config.ign` file containing the Ignition config, and add the directory tree as an additional initramfs:
+
+```shell
+mkdir -p usr/share/oem
+cp example.ign ./usr/share/oem/config.ign
+find usr | cpio -o -H newc -O oem.cpio
+gzip oem.cpio
+```
+
+Confirm the archive looks correct and has your config inside it:
+
+```shell
+gzip --stdout --decompress oem.cpio.gz | cpio -it
+./
+usr
+usr/share
+usr/share/oem
+usr/share/oem/config.ign
+```
+
+Add the `oem.cpio.gz` file to your PXE boot directory, then [append it][append-initrd] to the `initrd` line in your `pxelinux.cfg`:
+
+```text
+...
+initrd flatcar_production_pxe_image.cpio.gz,oem.cpio.gz
+kernel flatcar_production_pxe.vmlinuz flatcar.first_boot=1
+...
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[append-initrd]: http://www.syslinux.org/wiki/index.php?title=SYSLINUX#INITRD_initrd_file
+[flatcar-user]: https://groups.google.com/forum/#!forum/flatcar-linux-user
+[irc]: irc://irc.freenode.org:6667/#flatcar
+[butane-configs]: ../../provisioning/config-transpiler
+[ignition]: ../../provisioning/ignition
+[ignition-kargs-ip]: ../../provisioning/ignition/network-configuration/#using-static-ip-addresses-with-ignition
+[oem]: ../community-platforms/notes-for-distributors#image-customization
+[installing-to-disk]: installing-to-disk
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
+
diff --git a/content/docs/latest/installing/bare-metal/installing-to-disk.md b/content/docs/latest/installing/bare-metal/installing-to-disk.md
new file mode 100644
index 00000000..d7d9d28f
--- /dev/null
+++ b/content/docs/latest/installing/bare-metal/installing-to-disk.md
@@ -0,0 +1,180 @@
+---
+title: Installing Flatcar Container Linux to disk
+linktitle: Using flatcar-install script
+description: >
+ How to use the flatcar-install script to install Flatcar from
+ a running system.
+weight: 10
+aliases:
+ - ../../os/installing-to-disk
+ - ../../bare-metal/installing-to-disk
+---
+## Required Dependencies
+If you want to use the `flatcar-install` script in an environment other than Flatcar Container Linux, ensure that the following binaries are present:
+```
+bash
+lbzip2 or bzip2
+mount, lsblk (often found in the util-linux package)
+wget
+grep
+cp, dd, mkfifo, mkdir, rm, tee (often found in the GNU coreutils package or as part of busybox)
+udevadm (found in systemd-udev package, or for Alpine images in eudev)
+gpg, gpg2 (found in gnupg2)
+gawk (often found in GNU gawk package)
+```
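+
+A quick way to check whether these tools are available is a small shell loop (a sketch; `lbzip2`/`bzip2` and `gpg`/`gpg2` are alternatives, so either member of a pair suffices):
+
+```shell
+missing=""
+# Tools that must be individually present.
+for tool in bash mount lsblk wget grep cp dd mkfifo mkdir rm tee udevadm gawk; do
+  command -v "$tool" > /dev/null 2>&1 || missing="$missing $tool"
+done
+# Alternative pairs: any one member is enough.
+command -v lbzip2 > /dev/null 2>&1 || command -v bzip2 > /dev/null 2>&1 || missing="$missing lbzip2/bzip2"
+command -v gpg > /dev/null 2>&1 || command -v gpg2 > /dev/null 2>&1 || missing="$missing gpg/gpg2"
+if [ -z "$missing" ]; then echo "all required tools found"; else echo "missing:$missing"; fi
+```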
+
+
+## Install script
+
+There is a simple installer that will destroy everything on the given target disk and install Flatcar Container Linux. Essentially it downloads an image, verifies it with gpg, and then copies it bit for bit to disk. An installation requires at least 8 GB of usable space on the device.
+
+The script is self-contained, hosted [on GitHub][flatcar-install], and can be run from any Linux distribution. You cannot normally install Flatcar Container Linux to the same device that is currently booted. However, the [Flatcar Container Linux ISO][flatcar-iso] or any Linux liveCD will allow Flatcar Container Linux to install to a non-active device.
+
+If you boot Flatcar Container Linux via PXE, the install script is already installed. By default the install script will attempt to install the same version and channel that was PXE-booted:
+
+```shell
+flatcar-install -d /dev/sda -i ignition.json
+```
+
+`ignition.json` should include user information (especially an SSH key) generated from a [Butane Config][butane-section], or you will not be able to log into your Flatcar Container Linux instance.
+
+If you are installing on VMware, pass `-o vmware_raw` to install the VMware-specific image:
+
+```shell
+flatcar-install -d /dev/sda -i ignition.json -o vmware_raw
+```
+
+## Choose a channel
+
+Flatcar Container Linux is designed to be [updated automatically][update-strategies] with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+
+
+
+
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+If you want to ensure you are installing the latest Alpha version, use the `-C` option:
+
+```shell
+flatcar-install -d /dev/sda -C alpha
+```
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+If you want to ensure you are installing the latest Beta version, use the `-C` option:
+
+```shell
+flatcar-install -d /dev/sda -C beta
+```
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+If you want to ensure you are installing the latest Stable version, use the `-C` option:
+
+```shell
+flatcar-install -d /dev/sda -C stable
+```
+
+
+
+
+For reference, here are the rest of the `flatcar-install` options:
+
+```shell
+-d DEVICE Install Flatcar Container Linux to the given device.
+-s EXPERIMENTAL: Install Flatcar Container Linux to the smallest unmounted disk found
+ (min. size 10GB). It is recommended to use it with -e or -I to filter the
+ block devices by their major numbers. E.g., -e 7 to exclude loop devices
+ or -I 8,259 for certain disk types. Read more about the numbers here:
+ https://www.kernel.org/doc/Documentation/admin-guide/devices.txt.
+-V VERSION Version to install (e.g. current, or current-2022 for the LTS 2022 stream)
+-B BOARD Flatcar Container Linux board to use
+-C CHANNEL Release channel to use (e.g. beta)
+-I|e EXPERIMENTAL (used with -s): List of major device numbers to in-/exclude
+ when finding the smallest disk.
+-o OEM OEM type to install (e.g. ami), using flatcar_production__image.bin.bz2
+-c CLOUD Insert a cloud-init config to be executed on boot.
+-i IGNITION Insert an Ignition config to be executed on boot.
+-b BASEURL URL to the image mirror (overrides BOARD and CHANNEL)
+-k KEYFILE Override default GPG key for verifying image signature
+-f IMAGE Install unverified local image file to disk instead of fetching
+-n Copy generated network units to the root partition.
+-v Super verbose, for debugging.
+```
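+
+For example, per the `-s`/`-e` description above, the following would install the current Stable release to the smallest unmounted disk while excluding loop devices (a sketch; double-check which disk gets selected before relying on this on a machine holding data you care about):
+
+```shell
+flatcar-install -s -e 7 -C stable -i ignition.json
+```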
+
+## Butane Configs
+
+By default there isn't a password or any other way to log into a fresh Flatcar Container Linux system. The easiest way to configure accounts, add systemd units, and more is via Butane Configs. Jump over to the [docs to learn about the supported features][butane].
+
+After using [Butane][butane] to produce an Ignition config, the installation script will process the `ignition.json` file specified with the `-i` flag and use it when the installed system boots.
+
+A Butane Config YAML that specifies an SSH key for the `core` user but doesn't use any other parameters looks like:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
+
+To start the installation script with a reference to our Ignition config, run:
+
+```shell
+flatcar-install -d /dev/sda -C stable -i ~/ignition.json
+```
+
+### Advanced Butane Config example
+
+This Butane YAML example will configure Flatcar Container Linux to run an NGINX Docker container.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][docs-root].
+
+[quickstart]: ../
+[docs-root]: ../../
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[flatcar-iso]: booting-with-iso
+[butane-section]: #butane-configs
+[flatcar-install]: https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-install
+[cl-configs]: ../../provisioning/cl-config
+[butane]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/installing/bare-metal/raspberry-pi.md b/content/docs/latest/installing/bare-metal/raspberry-pi.md
new file mode 100644
index 00000000..504a92f2
--- /dev/null
+++ b/content/docs/latest/installing/bare-metal/raspberry-pi.md
@@ -0,0 +1,215 @@
+---
+title: Running Flatcar Container Linux on Raspberry Pi 4
+linktitle: Running on Raspberry Pi 4
+weight: 10
+---
+### Hardware Requirements
+
+- A Raspberry Pi 4
+- Storage: a USB drive and/or an SD card. A USB 3.0 drive is recommended for the best performance for the price.
+- Display (via micro HDMI/HDMI/Serial Cables)
+- Keyboard
+
+---
+
+
+### Before we start
+**A word of warning**:
+
+- The UEFI firmware used in this guide is an [_UNOFFICIAL_ firmware](https://rpi4-uefi.dev/faq/#Is_this_an_official_Raspberry_Pi_Foundation_project), provided under an open source BSD license.
+- Flatcar Container Linux support for Raspberry Pi is still in its early stages and is not thoroughly tested.
+- Deploy Flatcar Container Linux on this hardware purely for fun and learning.
+- Please follow the documentation at your own risk.
+
+---
+
+### Update the EEPROM
+The Raspberry Pi 4 uses an EEPROM to boot the system. Before proceeding, it is recommended to update the EEPROM. Raspberry Pi OS automatically updates the bootloader on system boot, so if you are already running Raspberry Pi OS, the bootloader may already be up to date.
+
+To update the EEPROM manually, you can use either the Raspberry Pi Imager or `raspi-config`. The former is the method recommended in the [Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/raspberry-pi.html#raspberry-pi-4-boot-eeprom).
+
+As we will see later, the RPi4 UEFI firmware also requires a recent EEPROM version.
+
+#### Using the Raspberry Pi Imager (Recommended)
+
+- Install the [Raspberry Pi Imager](https://www.raspberrypi.com/software/) software. You can also look for the software in your distribution repository.
+- Launch `Raspberry Pi Imager`.
+- Select `Misc utility images` under `Operating System`.
+- Select `Bootloader`.
+- Select the boot mode: `SD` or `USB`.
+- Select the appropriate storage device (the SD card or USB drive to write to).
+- Boot the Raspberry Pi with the new image and wait for at least 10 seconds.
+- The green activity LED will blink with a steady pattern and the HDMI display will be green on success.
+- Power off the Raspberry Pi and disconnect the storage.
+
+#### Using raspi-config
+
+- Update the `rpi-eeprom` package on the running Raspberry Pi OS.
+```bash
+sudo apt update
+sudo apt full-upgrade
+sudo apt install rpi-eeprom
+```
+- Run `sudo raspi-config`
+- Select `Advanced Options`.
+- Select `Bootloader Version`
+- Select `Latest` for the latest stable bootloader release.
+- Reboot
+
+#### Using rpi-eeprom-update
+
+- Update the `rpi-eeprom` package on the running Raspberry Pi OS.
+```bash
+sudo apt update
+sudo apt full-upgrade
+sudo apt install rpi-eeprom
+```
+
+- Check if there are available updates.
+```bash
+sudo rpi-eeprom-update
+```
+
+- Install the update
+```bash
+# The update is pulled from the `default` release channel.
+# The other available channels are: latest and beta
+# You can update the channel by updating the value of
+# `FIRMWARE_RELEASE_STATUS` in the `/etc/default/rpi-eeprom-update`
+# file. This is usually useful when you want features that are
+# not yet available on the default channel.
+
+# Install the update
+sudo rpi-eeprom-update -a
+
+# A reboot is needed to apply the update
+# To cancel the update, you can use: sudo rpi-eeprom-update -r
+sudo reboot
+```
+
+### Installing Flatcar
+
+#### Install the `flatcar-install` script
+
+Flatcar provides a simple installer script that helps install Flatcar Container Linux on the target disk. The script is available on [GitHub](https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-install), and the first step is to install it on the host system.
+
+```bash
+mkdir -p ~/.local/bin
+# You may also add the `PATH` export to your shell profile, e.g. .bashrc or .zshrc.
+export PATH=$PATH:$HOME/.local/bin
+
+curl -LO https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-install
+chmod +x flatcar-install
+mv flatcar-install ~/.local/bin
+```
+
+#### Install Flatcar on the target device
+
+Now that the `flatcar-install` script is installed on the host machine, you can install the Flatcar Container Linux image on the target device.
+The target device can be a USB drive or an SD card.
+
+The options that we will be using with the script are:
+```bash
+# -d DEVICE Install Flatcar Container Linux to the given device.
+# -C CHANNEL Release channel to use
+# -B BOARD Flatcar Container Linux Board to use
+# -o OEM OEM type to install (e.g. ami), using flatcar_production__image.bin.bz2
+# -i IGNITION Insert an Ignition config to be executed on boot.
+```
+
+- The device is the target device that you would like to use. You can use the `lsblk` command to find the appropriate disk. In this example we will use `/dev/sda`.
+- With the given values of `channel` and `board`, the script will download the image, verify it with gpg, and then copy it bit for bit to disk.
+- Flatcar does not yet ship Raspberry Pi specific OEM images, so the value will be an empty string `''`.
+- Pass the Ignition file, `config.json` in this case, to provision the Pi during boot.
+```json
+{
+ "ignition": {
+ "config": {},
+ "security": {
+ "tls": {}
+ },
+ "timeouts": {},
+ "version": "2.3.0"
+ },
+ "networkd": {},
+ "passwd": {
+ "users": [
+ {
+ "name": "core",
+ "sshAuthorizedKeys": [
+
+ ]
+ }
+ ]
+ },
+ "storage": {
+ "files": [
+ {
+ "filesystem": "OEM",
+ "path": "/grub.cfg",
+ "append": true,
+ "contents": {
+ "source": "data:,set%20linux_console%3D%22console%3DttyAMA0%2C115200n8%20console%3Dtty1%22%0Aset%20linux_append%3D%22flatcar.autologin%20usbcore.autosuspend%3D-1%22%0A",
+ "verification": {}
+ },
+ "mode": 420
+ }
+ ],
+ "filesystems": [
+ {
+ "mount": {
+ "device": "/dev/disk/by-label/OEM",
+ "format": "btrfs"
+ },
+ "name": "OEM"
+ }
+ ]
+ },
+ "systemd": {}
+}
+```
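+
+For reference, the percent-encoded `source` in the `grub.cfg` entry above decodes to the following two lines (serial and HDMI consoles, autologin, and USB autosuspend disabled):
+
+```text
+set linux_console="console=ttyAMA0,115200n8 console=tty1"
+set linux_append="flatcar.autologin usbcore.autosuspend=-1"
+```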
+
+Write the image to the target device:
+```
+sudo flatcar-install -d /dev/sda -C stable -B arm64-usr -o '' -i config.json
+```
+
+If you already have the image downloaded, you can use the `-f` option to specify the path to the local image file.
+```
+sudo flatcar-install -d /dev/sda -C stable -B arm64-usr -o '' -i config.json -f flatcar_production_image.bin.bz2
+```
+
+#### Raspberry Pi 4 UEFI Firmware
+
+The [rpi-uefi community](https://rpi4-uefi.dev) ships an SBBR-compliant (UEFI+ACPI), Arm ServerReady ARM64 firmware for the Raspberry Pi 4. We will use it to UEFI-boot Flatcar.
+
+`v1.17` of the [pftf/RPi4](https://github.com/pftf/RPi4/releases/tag/v1.17) introduced two major changes:
+- First, it enabled booting the firmware directly from USB, which is particularly helpful if you are installing from a USB device.
+- Second, it added support for placing the Pi boot files directly into the EFI System Partition (ESP). This capability comes not from the UEFI firmware itself but from the upstream firmware by the Raspberry Pi Foundation, which is why updating the Pi's EEPROM at the very beginning is recommended.
+
+Let's move ahead with the final steps.
+
+- Place the UEFI firmware into the EFI System Partition.
+
+```bash
+# Note `/dev/sda` mentioned in the example needs to be the USB drive that
+# we installed flatcar onto
+efipartition=$(lsblk /dev/sda -oLABEL,PATH | awk '$1 == "EFI-SYSTEM" {print $2}')
+mkdir /tmp/efipartition
+sudo mount ${efipartition} /tmp/efipartition
+pushd /tmp/efipartition
+version=$(curl --silent "https://api.github.com/repos/pftf/RPi4/releases/latest" | jq -r .tag_name)
+sudo curl -LO https://github.com/pftf/RPi4/releases/download/${version}/RPi4_UEFI_Firmware_${version}.zip
+sudo unzip RPi4_UEFI_Firmware_${version}.zip
+sudo rm RPi4_UEFI_Firmware_${version}.zip
+popd
+sudo umount /tmp/efipartition
+```
+- Remove the USB drive/SD card from the host machine, attach it to the Raspberry Pi 4, and boot.
+
+In no time, your Raspberry Pi will boot and present you with a Flatcar Container Linux prompt.
+
+
+### Further Reading
+- [rpi4-uefi.dev](https://rpi4-uefi.dev/) - RPi4 UEFI Firmware Official Website
+- [Raspberry Pi](https://www.raspberrypi.com/documentation/computers/raspberry-pi.html#raspberry-pi-4-boot-eeprom) documentation
diff --git a/content/docs/latest/installing/cloud/_index.md b/content/docs/latest/installing/cloud/_index.md
new file mode 100644
index 00000000..47b80839
--- /dev/null
+++ b/content/docs/latest/installing/cloud/_index.md
@@ -0,0 +1,9 @@
+---
+title: Cloud Providers
+weight: 20
+description: >
+ This section provides information and guidance on running Flatcar
+ instances in different cloud environments.
+aliases:
+ - ../cloud-providers
+---
diff --git a/content/docs/latest/installing/cloud/aws-ec2.md b/content/docs/latest/installing/cloud/aws-ec2.md
new file mode 100644
index 00000000..7e4633ab
--- /dev/null
+++ b/content/docs/latest/installing/cloud/aws-ec2.md
@@ -0,0 +1,497 @@
+---
+title: Running Flatcar Container Linux on AWS EC2
+linktitle: Running on AWS EC2
+weight: 10
+aliases:
+ - ../../os/booting-on-ec2
+ - ../../cloud-providers/booting-on-ec2
+---
+
+The current AMIs for all Flatcar Container Linux channels and EC2 regions are listed below and updated frequently. Using CloudFormation is the easiest way to launch a cluster, but it is also possible to follow the manual steps at the end of the article. Questions can be directed to the Flatcar Container Linux [IRC channel][irc] or [user mailing list][flatcar-user].
+
+At the end of the document there are instructions for deploying with Terraform.
+
+## Release retention time
+
+After publishing, releases will remain available as public AMIs on AWS for 9 months. AMIs older than 9 months will be unpublished in regular garbage collection sweeps. Please note that this will not impact existing AWS instances that use those releases. However, deploying new instances (e.g. in autoscaling groups pinned to a specific AMI) will not be possible after the AMI has been unpublished.
+
+## Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+
+
+
+
+
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+View as json feed: {{< docs_amis_feed "alpha" >}}
+
+{{< docs_amis_table "alpha" >}}
+
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+View as json feed: {{< docs_amis_feed "beta" >}}
+
+{{< docs_amis_table "beta" >}}
+
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+View as json feed: {{< docs_amis_feed "stable" >}}
+
+{{< docs_amis_table "stable" >}}
+
+
+
+
+
+## Butane Configs
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the [docs to learn about the supported features][butane-configs].
+
+You can provide a raw Ignition JSON config to Flatcar Container Linux via the Amazon web console or [via the EC2 API][ec2-user-data].
+
+As an example, this Butane YAML config will start an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
+
+### Instance storage
+
+Ephemeral disks and additional EBS volumes attached to instances can be mounted with a `.mount` unit. Amazon's block storage devices are attached differently [depending on the instance type](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#InstanceStoreDeviceNames). Here's the Butane Config to format and mount the first ephemeral disk, `xvdb`, on most instance types:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ filesystems:
+ - device: /dev/xvdb
+ format: ext4
+ wipe_filesystem: true
+ label: ephemeral
+systemd:
+ units:
+ - name: media-ephemeral.mount
+ enabled: true
+ contents: |
+ [Mount]
+ What=/dev/disk/by-label/ephemeral
+ Where=/media/ephemeral
+ Type=ext4
+
+ [Install]
+ RequiredBy=local-fs.target
+```
+
+For more information about mounting storage, Amazon's [own documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) is the best source. You can also read about [mounting storage on Flatcar Container Linux](../../setup/storage/mounting-storage).
+
+### Adding more machines
+
+To add more instances to the cluster, just launch more with the same Butane Config, the appropriate security group and the AMI for that region. New instances will join the cluster regardless of region if the security groups are configured correctly.
+
+## SSH to your instances
+
+Flatcar Container Linux is set up to be a little more secure than other cloud images. By default, it uses the `core` user instead of `root` and doesn't use a password for authentication. You'll need to add one or more SSH keys via the AWS console, or add keys/passwords via your Butane Config, in order to log in.
+
+To connect to an instance after it's created, run:
+
+```shell
+ssh core@<ip>
+```
+
+## Multiple clusters
+
+If you would like to create multiple clusters you will need to change the "Stack Name". You can find the direct [template file on S3](https://flatcar-prod-ami-import-eu-central-1.s3.amazonaws.com/dist/aws/flatcar-stable-hvm.template).
+
+## Manual setup
+
+**TL;DR:** launch three instances of [{{< docs_amis_get_hvm "alpha" "us-east-1" >}}](https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi={{< docs_amis_get_hvm "alpha" "us-east-1" >}}) (amd64) in **us-east-1** with a security group that has ports 22, 2379, 2380, 4001, and 7001 open, and the same "User Data" on each host. SSH uses the `core` user and you have [etcd][etcd-docs] and [Docker][docker-docs] to play with.
+
+### Creating the security group
+
+You need to open ports 2379, 2380, 4001, and 7001 between servers in the `etcd` cluster. Step-by-step instructions are below.
+
+Note: _This step is only needed once_
+
+First we need to create a security group to allow Flatcar Container Linux instances to communicate with one another.
+
+1. Go to the [security group][sg] page in the EC2 console.
+2. Click "Create Security Group"
+ * Name: flatcar-testing
+ * Description: Flatcar Container Linux instances
+ * VPC: No VPC
+ * Click: "Yes, Create"
+3. In the details of the security group, click the `Inbound` tab
+4. First, create a security group rule for SSH
+ * Create a new rule: `SSH`
+ * Source: 0.0.0.0/0
+ * Click: "Add Rule"
+5. Add two security group rules for etcd communication
+ * Create a new rule: `Custom TCP rule`
+ * Port range: 2379
+ * Source: type "flatcar-testing" until your security group auto-completes. Should be something like "sg-8d4feabc"
+ * Click: "Add Rule"
+ * Repeat this process for port range 2380, 4001 and 7001 as well
+6. Click "Apply Rule Changes"
+
+[sg]: https://console.aws.amazon.com/ec2/home?region=us-east-1#s=SecurityGroups
+
+### Launching a test cluster
+
+We will be launching three instances, with a few parameters in the User Data, and selecting our security group.
+
+- Open the quick launch wizard to boot: [Alpha {{< docs_amis_get_hvm "alpha" "us-east-1" >}}](https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi={{< docs_amis_get_hvm "alpha" "us-east-1" >}}) (amd64), [Beta {{< docs_amis_get_hvm "beta" "us-east-1" >}}](https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi={{< docs_amis_get_hvm "beta" "us-east-1" >}}) (amd64), or [Stable {{< docs_amis_get_hvm "stable" "us-east-1" >}}](https://console.aws.amazon.com/ec2/home?region=us-east-1#launchAmi={{< docs_amis_get_hvm "stable" "us-east-1" >}}) (amd64)
+- On the second page of the wizard, launch 3 servers to test our clustering
+ - Number of instances: 3, "Continue"
+- Paste your Ignition JSON config in the EC2 dashboard into the "User Data" field, "Continue"
+- Storage Configuration, "Continue"
+- Tags, "Continue"
+- Create Key Pair: Choose a key of your choice, it will be added in addition to the one in the gist, "Continue"
+- Choose one or more of your existing Security Groups: "flatcar-testing" as above, "Continue"
+- Launch!
+
+## Installation from a VMDK image
+
+One way to install Flatcar is to import the generated VMDK image as a snapshot. The image file is available at `https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2`.
+Make sure you download the signature (it's available in `https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig`) and check it before proceeding.
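+
+The URL pattern can be expanded in the shell; the channel, architecture, and version values below are just examples:
+
+```shell
+# Expand the download URL pattern for a given channel/arch/version (example values)
+CHANNEL="alpha"
+ARCH="amd64"
+VERSION="current"
+BASE="https://${CHANNEL}.release.flatcar-linux.net/${ARCH}-usr/${VERSION}"
+echo "${BASE}/flatcar_production_ami_vmdk_image.vmdk.bz2"
+echo "${BASE}/flatcar_production_ami_vmdk_image.vmdk.bz2.sig"
+```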
+
+```shell
+$ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2
+$ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_ami_vmdk_image.vmdk.bz2.sig
+$ gpg --verify flatcar_production_ami_vmdk_image.vmdk.bz2.sig
+gpg: assuming signed data in 'flatcar_production_ami_vmdk_image.vmdk.bz2'
+gpg: Signature made Thu 15 Mar 2018 10:27:57 AM CET
+gpg: using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
+gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" [ultimate]
+```
+
+Then, follow the instructions in [Importing a Disk as a Snapshot Using VM Import/Export](https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-import-snapshot.html). You'll need to upload the uncompressed VMDK file to S3.
+
+After the snapshot is imported, you can go to "Snapshots" in the EC2 dashboard, and generate an AMI image from it.
+To make it work, use `/dev/sda2` as the "Root device name" and you probably want to select "Hardware-assisted virtualization" as "Virtualization type".
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+## Terraform
+
+The [`aws`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) Terraform Provider allows you to deploy machines in a declarative way.
+Read more about using Terraform and Flatcar [here](../../provisioning/terraform/).
+
+The following Terraform v0.13 module may serve as a base for your own setup.
+It will also take care of registering your SSH key at AWS EC2 and managing the network environment with Terraform.
+
+You can clone the setup from the [Flatcar Terraform examples repository](https://github.com/flatcar/flatcar-terraform/tree/main/aws) or create the files manually as we go through them and explain each one.
+
+```
+git clone https://github.com/flatcar/flatcar-terraform.git
+# From here on you could directly run it, TLDR:
+cd aws
+export AWS_ACCESS_KEY_ID=...
+export AWS_SECRET_ACCESS_KEY=...
+terraform init
+# Edit the server configs or just go ahead with the default example
+terraform plan
+terraform apply
+```
+
+Start with an `aws-ec2-machines.tf` file that contains the main declarations:
+
+```
+terraform {
+ required_version = ">= 0.13"
+ required_providers {
+ ct = {
+ source = "poseidon/ct"
+ version = "0.7.1"
+ }
+ template = {
+ source = "hashicorp/template"
+ version = "~> 2.2.0"
+ }
+ null = {
+ source = "hashicorp/null"
+ version = "~> 3.0.0"
+ }
+ aws = {
+ source = "hashicorp/aws"
+ version = "~> 3.19.0"
+ }
+ }
+}
+
+provider "aws" {
+ region = var.aws_region
+}
+
+resource "aws_vpc" "network" {
+ cidr_block = var.vpc_cidr
+
+ tags = {
+ Name = var.cluster_name
+ }
+}
+
+resource "aws_subnet" "subnet" {
+ vpc_id = aws_vpc.network.id
+ cidr_block = var.subnet_cidr
+
+ tags = {
+ Name = var.cluster_name
+ }
+}
+
+resource "aws_internet_gateway" "gateway" {
+ vpc_id = aws_vpc.network.id
+
+ tags = {
+ Name = var.cluster_name
+ }
+}
+
+resource "aws_route_table" "default" {
+ vpc_id = aws_vpc.network.id
+
+ route {
+ cidr_block = "0.0.0.0/0"
+ gateway_id = aws_internet_gateway.gateway.id
+ }
+
+ tags = {
+ Name = var.cluster_name
+ }
+}
+
+resource "aws_route_table_association" "public" {
+ route_table_id = aws_route_table.default.id
+ subnet_id = aws_subnet.subnet.id
+}
+
+resource "aws_security_group" "securitygroup" {
+ vpc_id = aws_vpc.network.id
+
+ tags = {
+ Name = var.cluster_name
+ }
+}
+
+resource "aws_security_group_rule" "outgoing_any" {
+ security_group_id = aws_security_group.securitygroup.id
+ type = "egress"
+ from_port = 0
+ to_port = 0
+ protocol = "-1"
+ cidr_blocks = ["0.0.0.0/0"]
+}
+
+resource "aws_security_group_rule" "incoming_any" {
+ security_group_id = aws_security_group.securitygroup.id
+ type = "ingress"
+ from_port = 0
+ to_port = 0
+ protocol = "-1"
+ cidr_blocks = ["0.0.0.0/0"]
+}
+
+resource "aws_key_pair" "ssh" {
+ key_name = var.cluster_name
+ public_key = var.ssh_keys.0
+}
+
+data "aws_ami" "flatcar_stable_latest" {
+ most_recent = true
+ owners = ["aws-marketplace"]
+
+ filter {
+ name = "architecture"
+ values = ["x86_64"]
+ }
+
+ filter {
+ name = "virtualization-type"
+ values = ["hvm"]
+ }
+
+ filter {
+ name = "name"
+ values = ["Flatcar-stable-*"]
+ }
+}
+
+resource "aws_instance" "machine" {
+ for_each = toset(var.machines)
+ instance_type = var.instance_type
+ user_data = data.ct_config.machine-ignitions[each.key].rendered
+ ami = data.aws_ami.flatcar_stable_latest.image_id
+ key_name = aws_key_pair.ssh.key_name
+
+ associate_public_ip_address = true
+ subnet_id = aws_subnet.subnet.id
+ vpc_security_group_ids = [aws_security_group.securitygroup.id]
+
+ tags = {
+ Name = "${var.cluster_name}-${each.key}"
+ }
+}
+
+data "ct_config" "machine-ignitions" {
+ for_each = toset(var.machines)
+ content = data.template_file.machine-configs[each.key].rendered
+}
+
+data "template_file" "machine-configs" {
+ for_each = toset(var.machines)
+ template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
+
+ vars = {
+ ssh_keys = jsonencode(var.ssh_keys)
+ name = each.key
+ }
+}
+```
+
+Create a `variables.tf` file that declares the variables used above:
+
+```
+variable "machines" {
+ type = list(string)
+ description = "Machine names, corresponding to cl/machine-NAME.yaml.tmpl files"
+}
+
+variable "cluster_name" {
+ type = string
+ description = "Cluster name used as prefix for the machine names"
+}
+
+variable "ssh_keys" {
+ type = list(string)
+ description = "SSH public keys for user 'core'"
+}
+
+variable "aws_region" {
+ type = string
+ default = "us-east-2"
+ description = "AWS Region to use for running the machine"
+}
+
+variable "instance_type" {
+ type = string
+ default = "t3.medium"
+ description = "Instance type for the machine"
+}
+
+variable "vpc_cidr" {
+ type = string
+ default = "172.16.0.0/16"
+}
+
+variable "subnet_cidr" {
+ type = string
+ default = "172.16.10.0/24"
+}
+```
+
+An `outputs.tf` file shows the resulting IP addresses:
+
+```
+output "ip-addresses" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => aws_instance.machine[key].public_ip
+ }
+}
+```
+
+Now you can use the module by declaring the variables and a Container Linux Configuration for a machine.
+First create a `terraform.tfvars` file with your settings:
+
+```
+cluster_name = "mycluster"
+machines = ["mynode"]
+ssh_keys = ["ssh-rsa AA... me@mail.net"]
+```
+
+The machine name listed in the `machines` variable is used to retrieve the corresponding [Container Linux Config](https://www.flatcar.org/docs/latest/provisioning/cl-config/).
+For each machine in the list, you should have a `machine-NAME.yaml.tmpl` file with a corresponding name.
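+
+As a sketch of the naming convention, this is the mapping from the `machines` variable to the template files Terraform reads from the `cl/` subfolder (the machine names are just examples):
+
+```shell
+# For each machine name, Terraform reads cl/machine-NAME.yaml.tmpl
+for name in mynode worker1; do    # example machine names
+  echo "cl/machine-${name}.yaml.tmpl"
+done
+```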
+
+For example, create the configuration for `mynode` in the file `machine-mynode.yaml.tmpl` (the SSH key used there is not strictly necessary since we already set it as a VM attribute):
+
+```yaml
+---
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ${ssh_keys}
+storage:
+ files:
+ - path: /home/core/works
+ filesystem: root
+ mode: 0755
+ contents:
+ inline: |
+ #!/bin/bash
+ set -euo pipefail
+ # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
+ hostname="$(hostname)"
+ echo My name is ${name} and the hostname is $${hostname}
+```
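+
+To illustrate the escaping in the template: Terraform substitutes `${name}`, while `$${hostname}` is rendered to a literal `${hostname}` that the shell resolves at runtime. The rendered script for `mynode` behaves like this sketch:
+
+```shell
+# Simulated result of Terraform's template rendering for machine "mynode":
+name="mynode"              # Terraform replaced ${name} with the machine name
+hostname="$(hostname)"     # $${hostname} stayed a shell variable, set at runtime
+echo "My name is ${name} and the hostname is ${hostname}"
+```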
+
+Finally, run Terraform v0.13 as follows to create the machine:
+
+```
+export AWS_ACCESS_KEY_ID=...
+export AWS_SECRET_ACCESS_KEY=...
+terraform init
+terraform apply
+```
+
+Log in via `ssh core@IPADDRESS` with the printed IP address (maybe add `-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`).
+
+When you make a change to `machine-mynode.yaml.tmpl` and run `terraform apply` again, the machine will be replaced.
+
+You can find this Terraform module in the repository for [Flatcar Terraform examples](https://github.com/flatcar/flatcar-terraform/tree/main/aws).
+
+[quickstart]: ../
+[doc-index]: ../../
+[flatcar-user]: https://groups.google.com/forum/#!forum/flatcar-linux-user
+[docker-docs]: https://docs.docker.com
+[etcd-docs]: https://etcd.io/docs
+[irc]: irc://irc.freenode.org:6667/#flatcar
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[ec2-user-data]: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/installing/cloud/azure.md b/content/docs/latest/installing/cloud/azure.md
new file mode 100644
index 00000000..1fdf9132
--- /dev/null
+++ b/content/docs/latest/installing/cloud/azure.md
@@ -0,0 +1,642 @@
+---
+title: Running Flatcar Container Linux on Microsoft Azure
+linktitle: Running on Microsoft Azure
+weight: 10
+aliases:
+ - ../../os/booting-on-azure
+ - ../../cloud-providers/booting-on-azure
+---
+
+## Creating resource group via Microsoft Azure CLI
+
+Follow the [installation and configuration guides][azure-cli] for the Microsoft Azure CLI to set up your local installation.
+
+Instances on Microsoft Azure must be created within a resource group. Create a new resource group with the following command:
+
+```shell
+az group create --name group-1 --location <location>
+```
+
+Now that you have a resource group, you can choose a channel of Flatcar Container Linux you would like to install.
+
+## Using the official image from the Marketplace
+
+Official Flatcar Container Linux images for all channels are available in the Marketplace.
+Flatcar is published by the `kinvolk` publisher on Marketplace.
+Flatcar Container Linux is designed to be [updated automatically][update-docs] with different schedules per channel. Updating
+can be [disabled][reboot-docs], although it is not recommended to do so. The [release notes][release-notes] contain
+information about specific features and bug fixes.
+
+The following commands will create a single instance through the Azure CLI, depending on the release channel you choose.
+
+**Stable**
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+```shell
+$ az vm image list --all -p kinvolk -f flatcar -s stable # Query the image name urn specifier
+[
+  {
+    "offer": "flatcar-container-linux",
+    "publisher": "kinvolk",
+    "sku": "stable",
+    "urn": "kinvolk:flatcar-container-linux:stable:2345.3.0",
+    "version": "2345.3.0"
+  }
+]
+$ az vm create --name node-1 --resource-group group-1 --admin-username core --custom-data "$(cat config.ign)" --image kinvolk:flatcar-container-linux:stable:2345.3.0
+```
+
+**Beta**
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+```shell
+$ az vm image list --all -p kinvolk -f flatcar -s beta # Query the image name urn specifier
+[
+  {
+    "offer": "flatcar-container-linux",
+    "publisher": "kinvolk",
+    "sku": "beta",
+    "urn": "kinvolk:flatcar-container-linux:beta:2411.1.0",
+    "version": "2411.1.0"
+  }
+]
+$ az vm create --name node-1 --resource-group group-1 --admin-username core --custom-data "$(cat config.ign)" --image kinvolk:flatcar-container-linux:beta:2411.1.0
+```
+
+**Alpha**
+
+The Alpha channel closely tracks the master branch and is released frequently. The newest versions of system libraries and utilities are available for testing in this channel. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+```shell
+$ az vm image list --all -p kinvolk -f flatcar -s alpha
+[
+  {
+    "offer": "flatcar-container-linux",
+    "publisher": "kinvolk",
+    "sku": "alpha",
+    "urn": "kinvolk:flatcar-container-linux:alpha:2430.0.0",
+    "version": "2430.0.0"
+  }
+]
+$ az vm create --name node-1 --resource-group group-1 --admin-username core --custom-data "$(cat config.ign)" --image kinvolk:flatcar-container-linux:alpha:2430.0.0
+```
+You can use either of the image offers `flatcar-container-linux` and `flatcar-container-linux-free`; the contents are the same.
+The SKU, which is the third element of the image URN, corresponds to one of the release channels and also depends on whether you use Hyper-V Generation 1 or 2.
+Generation 1 instance types use the channel names `alpha`, `beta` or `stable` as is; for Generation 2 instance types append `-gen2` to the channel name, i.e., use one of `alpha-gen2`, `beta-gen2` or `stable-gen2`.
+This means the Gen 2 image URN for the above Stable example becomes `kinvolk:flatcar-container-linux:stable-gen2:2345.3.0`.
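+
+Putting the URN rules together, this sketch builds the image URN from its parts (values taken from the examples above):
+
+```shell
+# Build a Marketplace image URN: publisher:offer:sku:version.
+# For Hyper-V Generation 2 instance types, the SKU gets a "-gen2" suffix.
+publisher="kinvolk"
+offer="flatcar-container-linux"
+channel="stable"
+generation="gen2"          # or "gen1"
+version="2345.3.0"
+sku="$channel"
+if [ "$generation" = "gen2" ]; then sku="${channel}-gen2"; fi
+echo "${publisher}:${offer}:${sku}:${version}"
+```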
+
+
+Before being able to use them, you may need to accept the legal terms once, here done for `flatcar-container-linux` and `stable`:
+
+```shell
+az vm image terms show --publisher kinvolk --offer flatcar-container-linux --plan stable
+az vm image terms accept --publisher kinvolk --offer flatcar-container-linux --plan stable
+```
+
+### Flatcar Pro Images
+
+Flatcar Pro images were paid marketplace images that came with commercial support and extra features. All the previous features of Flatcar Pro images, such as support for NVIDIA GPUs, are now available to all users in standard Flatcar marketplace images.
+
+### Plan information for building your image from the Marketplace Image
+
+When building an image based on the Marketplace image you sometimes need to specify the original plan. The plan name is the image SKU, e.g., `stable`, the plan product is the image offer, e.g., `flatcar-container-linux-free`, and the plan publisher is the same (`kinvolk`).
+
+## Uploading your own Image
+
+To automatically download the Flatcar image for Azure from the release page and upload it to your Azure account, run the following command:
+
+```shell
+docker run -it --rm quay.io/kinvolk/azure-flatcar-image-upload \
+  --resource-group <resource group> \
+  --storage-account-name <storage account name>
+```
+
+Where:
+
+- `<resource group>` should be a valid [Resource Group][resource-group] name.
+- `<storage account name>` should be a valid [Storage Account][storage-account] name.
+
+During execution, the script will ask you to log into your Azure account and then create all necessary resources for
+uploading an image. It will then download the requested Flatcar Container Linux image and upload it to Azure.
+
+If uploading fails with one of the following errors, it usually indicates a problem on Azure's side:
+
+```text
+Put https://mystorage.blob.core.windows.net/vhds?restype=container: dial tcp: lookup iago-dev.blob.core.windows.net on 80.58.61.250:53: no such host
+```
+
+```text
+storage: service returned error: StatusCode=403, ErrorCode=AuthenticationFailed, ErrorMessage=Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. RequestId:a3ed1ebc-701e-010c-5258-0a2e84000000 Time:2019-05-14T13:26:00.1253383Z, RequestId=a3ed1ebc-701e-010c-5258-0a2e84000000, QueryParameterName=, QueryParameterValue=
+```
+
+The command is idempotent and it is therefore safe to re-run it in case of failure.
+
+To see all available options, run:
+
+```shell
+docker run -it --rm quay.io/kinvolk/azure-flatcar-image-upload --help
+
+Usage: /usr/local/bin/upload_images.sh [OPTION...]
+
+ Required arguments:
+ -g, --resource-group Azure resource group.
+ -s, --storage-account-name Azure storage account name. Must be between 3 and 24 characters and unique within Azure.
+
+ Optional arguments:
+ -c, --channel Flatcar Container Linux release channel. Defaults to 'stable'.
+ -v, --version Flatcar Container Linux version. Defaults to 'current'.
+ -i, --image-name Image name, which will be used later in Lokomotive configuration. Defaults to 'flatcar-<channel>'.
+ -l, --location Azure location to storage image. To list available locations run with '--locations'. Defaults to 'westeurope'.
+ -S, --storage-account-type Type of storage account. Defaults to 'Standard_LRS'.
+```
+
+The Dockerfile for the `quay.io/kinvolk/azure-flatcar-image-upload` image is managed [here][azure-flatcar-image-upload].
+
+## SSH User Setup
+
+Azure can provision a user account and SSH key through the WAAgent daemon, which runs by default.
+In the web UI you can enter a user name for a new user and provide an SSH public key to be set up.
+
+On the CLI you can pass the user and the SSH key as follows:
+
+```shell
+az vm create ... --admin-username myuser --ssh-key-values ~/.ssh/id_rsa.pub
+```
+
+This also works for the `core` user.
+If you plan to use the `core` user with an SSH key set up through Ignition userdata, the key argument here is not needed, and you can safely pass `--admin-username core` and no new user gets created.
+
+## Butane Config
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more
+via a Butane Config. Head over to the [provisioning docs][butane-configs] to learn how to use Butane Configs.
+Note that Microsoft Azure doesn't allow an instance's userdata to be modified after the instance has been launched. This
+isn't a problem since Ignition, the tool that consumes the userdata, only runs on the first boot.
+
+You can provide a raw Ignition JSON config (produced from a Butane Config) to Flatcar Container Linux via the Azure CLI using the `--custom-data` flag
+or in the web UI under _Custom Data_ (not _User Data_).
+
+As an example, this Butane YAML config will start an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
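+
+As a quick sanity check, the transpiled JSON should declare Ignition spec 3.3.0, which corresponds to the `flatcar`/`1.0.0` Butane variant used above. The sketch below fakes the file so it is self-contained; in practice, run the check against the real `ignition.json`:
+
+```shell
+# Simulate a transpiled config and verify the declared Ignition spec version.
+printf '%s' '{"ignition":{"version":"3.3.0"}}' > /tmp/ignition.json
+grep -o '"version":"3.3.0"' /tmp/ignition.json
+```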
+
+## Use the Azure Hyper-V Host for time synchronisation instead of NTP
+
+By default, Flatcar Container Linux uses [`systemd-timesyncd`](https://www.freedesktop.org/software/systemd/man/systemd-timesyncd.service.html) for date and time synchronization, using an external NTP server as the source of accurate time.
+Azure provides an alternative for accurate time - a [PTP](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/time-sync) clock source that surfaces Azure Host time in Azure guest VMs.
+Because Azure Host time is rigorously maintained with high precision, it’s a good source against which to synchronize guest time.
+Unfortunately, systemd-timesyncd doesn’t support PTP clock sources, though there is an [upstream feature request](https://github.com/systemd/systemd/issues/22828) for adding this.
+To work around this missing feature and to use Azure's PTP clock source, we can employ [`chrony`](https://chrony.tuxfamily.org/) in an [`alpine`](https://alpinelinux.org/) container to synchronise time.
+Since Alpine is relentlessly optimised for size, the container will only take about 16 MB of disk space.
+Here's a configuration snippet to create a minimal chrony container during provisioning, and use it instead of systemd-timesyncd:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /opt/chrony/Dockerfile
+ mode: 0644
+ contents:
+ inline: |
+ FROM alpine
+ RUN apk add chrony
+ RUN rm /etc/chrony/chrony.conf
+ - path: /opt/chrony/chrony.conf
+ mode: 0644
+ contents:
+ inline: |
+ log statistics measurements tracking
+ logdir /var/log/chrony
+ driftfile /var/lib/chrony/drift
+ makestep 1.0 3
+ maxupdateskew 100.0
+ dumpdir /var/lib/chrony
+ rtcsync
+ refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0 stratum 2
+ directories:
+ - path: /opt/chrony/logs
+ mode: 0777
+systemd:
+ units:
+ - name: systemd-timesyncd.service
+ mask: true
+ - name: prepare-chrony.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Build the chrony container image
+ ConditionPathExists=!/opt/chrony-build/done
+ [Service]
+ Type=oneshot
+ RemainAfterExit=true
+ Restart=on-failure
+ WorkingDirectory=/opt/chrony
+ ExecStart=/usr/bin/docker build -t chrony .
+ ExecStartPost=/usr/bin/touch done
+ [Install]
+ WantedBy=multi-user.target
+ - name: chrony.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Chrony RTC time sync service
+ After=docker.service prepare-chrony.service
+ Requires=docker.service prepare-chrony.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force chrony
+ ExecStart=/usr/bin/docker run --name chrony -i --cap-add=SYS_TIME -v /opt/chrony/logs:/var/log/chrony -v /opt/chrony/chrony.conf:/etc/chrony/chrony.conf --device=/dev/rtc:/dev/rtc --device=/dev/ptp_hyperv:/dev/ptp0 chrony chronyd -s -d
+ ExecStop=/usr/bin/docker stop chrony
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+If the above works for your use case without modifications or additions (i.e. there's no need to configure anything else), feel free to supply this Ignition configuration as custom data for your deployments:
+
+```json
+{
+ "ignition": {
+ "version": "3.3.0"
+ },
+ "storage": {
+ "directories": [
+ {
+ "path": "/opt/chrony/logs",
+ "mode": 511
+ }
+ ],
+ "files": [
+ {
+ "path": "/opt/chrony/Dockerfile",
+ "contents": {
+ "compression": "",
+ "source": "data:,FROM%20alpine%0ARUN%20apk%20add%20chrony%0ARUN%20rm%20%2Fetc%2Fchrony%2Fchrony.conf%0A"
+ },
+ "mode": 420
+ },
+ {
+ "path": "/opt/chrony/chrony.conf",
+ "contents": {
+ "compression": "gzip",
+ "source": "data:;base64,H4sIAAAAAAAC/0TMQW4DIQxG4b1P8V+ggSRH6KbLXsEFM0UDA7JN2ty+UhQ1q7f4pNfGBnP2al6ToQvbUulyuMGV016PjdrYclWEG2toYwvpW8dxp6y1eKlNnlK/nhIeQp13MZeJ8yniSp1/18zsYrv84BzjKVJefb7W/wNST3Y/EqmU1Eba8fnxjpDlFqbPiDlawxX50bcLRikmjghzZV8dF/oLAAD//xHNUSnZAAAA"
+ },
+ "mode": 420
+ }
+ ]
+ },
+ "systemd": {
+ "units": [
+ {
+ "mask": true,
+ "name": "systemd-timesyncd.service"
+ },
+ {
+ "contents": "[Unit]\nDescription=Build the chrony container image\nConditionPathExists=!/opt/chrony-build/done\n[Service]\nType=oneshot\nRemainAfterExit=true\nRestart=on-failure\nWorkingDirectory=/opt/chrony\nExecStart=/usr/bin/docker build -t chrony .\nExecStartPost=/usr/bin/touch done\n[Install]\nWantedBy=multi-user.target\n",
+ "enabled": true,
+ "name": "prepare-chrony.service"
+ },
+ {
+ "contents": "[Unit]\nDescription=Chrony RTC time sync service\nAfter=docker.service prepare-chrony.service\nRequires=docker.service prepare-chrony.service\n[Service]\nTimeoutStartSec=0\nExecStartPre=-/usr/bin/docker rm --force chrony\nExecStart=/usr/bin/docker run --name chrony -i --cap-add=SYS_TIME -v /opt/chrony/logs:/var/log/chrony -v /opt/chrony/chrony.conf:/etc/chrony/chrony.conf --device=/dev/rtc:/dev/rtc --device=/dev/ptp_hyperv:/dev/ptp0 chrony chronyd -s -d\nExecStop=/usr/bin/docker stop chrony\nRestart=always\nRestartSec=5s\n[Install]\nWantedBy=multi-user.target\n",
+ "enabled": true,
+ "name": "chrony.service"
+ }
+ ]
+ }
+}
+```
+
+
+## Using Flatcar Container Linux
+
+For information on using Flatcar Container Linux check out the [Flatcar Container Linux quickstart guide][quickstart] or dive into [more specific topics][docs].
+
+## Terraform
+
+The [`azurerm`](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) Terraform Provider allows you to deploy machines in a declarative way.
+Read more about using Terraform and Flatcar [here](../../provisioning/terraform/).
+
+The following Terraform v0.13 module may serve as a base for your own setup.
+
+You can clone the setup from the [Flatcar Terraform examples repository](https://github.com/flatcar/flatcar-terraform/tree/main/azure) or create the files manually as we go through them and explain each one.
+
+```
+git clone https://github.com/flatcar/flatcar-terraform.git
+# From here on you could directly run it, TLDR:
+cd azure
+export ARM_SUBSCRIPTION_ID=""
+export ARM_TENANT_ID=""
+export ARM_CLIENT_ID=""
+terraform init
+# Edit the server configs or just go ahead with the default example
+terraform plan
+terraform apply
+```
+
+Start with an `azure-vms.tf` file that contains the main declarations:
+
+```
+terraform {
+ required_version = ">= 0.13"
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "~>2.0"
+ }
+ ct = {
+ source = "poseidon/ct"
+ version = "0.7.1"
+ }
+ template = {
+ source = "hashicorp/template"
+ version = "~> 2.2.0"
+ }
+ null = {
+ source = "hashicorp/null"
+ version = "~> 3.0.0"
+ }
+ }
+}
+
+provider "azurerm" {
+ features {}
+}
+
+resource "azurerm_resource_group" "main" {
+ name = "${var.cluster_name}-rg"
+ location = var.resource_group_location
+}
+
+resource "azurerm_virtual_network" "main" {
+ name = "${var.cluster_name}-network"
+ address_space = ["10.0.0.0/16"]
+ location = azurerm_resource_group.main.location
+ resource_group_name = azurerm_resource_group.main.name
+}
+
+resource "azurerm_subnet" "internal" {
+ name = "internal"
+ resource_group_name = azurerm_resource_group.main.name
+ virtual_network_name = azurerm_virtual_network.main.name
+ address_prefixes = ["10.0.2.0/24"]
+}
+
+resource "azurerm_public_ip" "pip" {
+ for_each = toset(var.machines)
+ name = "${var.cluster_name}-${each.key}-pip"
+ resource_group_name = azurerm_resource_group.main.name
+ location = azurerm_resource_group.main.location
+ allocation_method = "Dynamic"
+}
+
+resource "azurerm_network_interface" "main" {
+ for_each = toset(var.machines)
+ name = "${var.cluster_name}-${each.key}-nic"
+ resource_group_name = azurerm_resource_group.main.name
+ location = azurerm_resource_group.main.location
+
+ ip_configuration {
+ name = "internal"
+ subnet_id = azurerm_subnet.internal.id
+ private_ip_address_allocation = "Dynamic"
+ public_ip_address_id = azurerm_public_ip.pip[each.key].id
+ }
+}
+
+resource "azurerm_linux_virtual_machine" "machine" {
+ for_each = toset(var.machines)
+ name = "${var.cluster_name}-${each.key}"
+ resource_group_name = azurerm_resource_group.main.name
+ location = azurerm_resource_group.main.location
+ size = var.server_type
+ admin_username = "core"
+ custom_data = base64encode(data.ct_config.machine-ignitions[each.key].rendered)
+ network_interface_ids = [
+ azurerm_network_interface.main[each.key].id,
+ ]
+
+ admin_ssh_key {
+ username = "core"
+ public_key = var.ssh_keys.0
+ }
+
+ source_image_reference {
+ publisher = "kinvolk"
+ offer = "flatcar-container-linux"
+ sku = "stable"
+ version = var.flatcar_stable_version
+ }
+
+ plan {
+ name = "stable"
+ product = "flatcar-container-linux"
+ publisher = "kinvolk"
+ }
+
+ os_disk {
+ storage_account_type = "Standard_LRS"
+ caching = "ReadWrite"
+ }
+}
+
+data "ct_config" "machine-ignitions" {
+ for_each = toset(var.machines)
+ content = data.template_file.machine-configs[each.key].rendered
+}
+
+data "template_file" "machine-configs" {
+ for_each = toset(var.machines)
+ template = file("${path.module}/cl/machine-${each.key}.yaml.tmpl")
+
+ vars = {
+ ssh_keys = jsonencode(var.ssh_keys)
+ name = each.key
+ }
+}
+```
+
+Create a `variables.tf` file that declares the variables used above:
+
+```
+variable "resource_group_location" {
+ default = "eastus"
+ description = "Location of the resource group."
+}
+
+variable "machines" {
+ type = list(string)
+ description = "Machine names, corresponding to machine-NAME.yaml.tmpl files"
+}
+
+variable "cluster_name" {
+ type = string
+ description = "Cluster name used as prefix for the machine names"
+}
+
+variable "ssh_keys" {
+ type = list(string)
+ description = "SSH public keys for user 'core' (and to register directly with waagent for the first)"
+}
+
+variable "server_type" {
+ type = string
+ default = "Standard_D2s_v4"
+ description = "The server type to rent"
+}
+
+variable "flatcar_stable_version" {
+ type = string
+ description = "The Flatcar Stable release you want to use for the initial installation, e.g., 2605.12.0"
+}
+```
+
+An `outputs.tf` file shows the resulting IP addresses:
+
+```
+output "ip-addresses" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => azurerm_linux_virtual_machine.machine[key].public_ip_address
+ }
+}
+```
+
+Now you can use the module by declaring the variables and a Container Linux Configuration for a machine.
+First create a `terraform.tfvars` file with your settings:
+
+```
+cluster_name = "mycluster"
+machines = ["mynode"]
+ssh_keys = ["ssh-rsa AA... me@mail.net"]
+flatcar_stable_version = "x.y.z"
+resource_group_location = "westeurope"
+```
+
+You can resolve the latest Flatcar Stable version with this shell command:
+
+```
+curl -sSfL https://stable.release.flatcar-linux.net/amd64-usr/current/version.txt | grep -m 1 FLATCAR_VERSION_ID= | cut -d = -f 2
+```
+
+The machine name listed in the `machines` variable is used to retrieve the corresponding [Container Linux Config](../../provisioning/config-transpiler/configuration) template from the `cl/` subfolder.
+For each machine in the list, you should have a `machine-NAME.yaml.tmpl` file with a corresponding name.
+
+Create the configuration for `mynode` in the file `cl/machine-mynode.yaml.tmpl`:
+
+```yaml
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ${ssh_keys}
+storage:
+ files:
+ - path: /home/core/works
+ filesystem: root
+ mode: 0755
+ contents:
+ inline: |
+ #!/bin/bash
+ set -euo pipefail
+ # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
+ hostname="$(hostname)"
+ echo My name is ${name} and the hostname is $${hostname}
+```
+
+First find your subscription ID, then create a service account for Terraform and note the tenant ID, the client (app) ID, and the client secret (password):
+
+```
+az login
+az account set --subscription <subscription id>
+az ad sp create-for-rbac --name <name> --role Contributor
+{
+  "appId": "...",
+  "displayName": "<name>",
+ "password": "...",
+ "tenant": "..."
+}
+```
+
+Make sure you have at least AZ CLI version 2.32.0 if you get the error `Values of identifierUris property must use a verified domain of the organization or its subdomain`.
+AZ CLI installation docs are [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt#option-2-step-by-step-installation-instructions).
+
+Before you run Terraform, accept the image terms:
+
+```
+az vm image terms accept --urn kinvolk:flatcar-container-linux:stable:<version>
+```
+
+Finally, run Terraform v0.13 as follows to create the machine:
+
+```
+export ARM_SUBSCRIPTION_ID=""
+export ARM_TENANT_ID=""
+export ARM_CLIENT_ID=""
+export ARM_CLIENT_SECRET=""
+terraform init
+terraform plan
+terraform apply
+```
+
+Log in via `ssh core@IPADDRESS` with the printed IP address.
+
+When you make a change to `cl/machine-mynode.yaml.tmpl` and run `terraform apply` again, the machine will be replaced.
+
+You can find this Terraform module in the repository for [Flatcar Terraform examples](https://github.com/flatcar/flatcar-terraform/tree/main/azure).
+
+[flatcar-user]: https://groups.google.com/forum/#!forum/flatcar-linux-user
+[etcd-docs]: https://etcd.io/docs
+[quickstart]: ../
+[reboot-docs]: ../../setup/releases/update-strategies
+[azure-cli]: https://docs.microsoft.com/en-us/cli/azure/overview
+[butane-configs]: ../../provisioning/config-transpiler
+[irc]: irc://irc.freenode.org:6667/#flatcar
+[docs]: ../../
+[resource-group]: https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions#naming-rules-and-restrictions
+[storage-account]: https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview#naming-storage-accounts
+[azure-flatcar-image-upload]: https://github.com/flatcar/flatcar-cloud-image-uploader
+[release-notes]: https://flatcar.org/releases
+[update-docs]: ../../setup/releases/update-strategies
diff --git a/content/docs/latest/installing/cloud/digitalocean.md b/content/docs/latest/installing/cloud/digitalocean.md
new file mode 100644
index 00000000..2fac9ed2
--- /dev/null
+++ b/content/docs/latest/installing/cloud/digitalocean.md
@@ -0,0 +1,391 @@
+---
+title: Running Flatcar Container Linux on DigitalOcean
+linktitle: Running on DigitalOcean
+weight: 20
+aliases:
+ - ../../os/booting-on-digitalocean
+ - ../../cloud-providers/booting-on-digitalocean
+---
+
+On Digital Ocean, users can upload Flatcar Container Linux as a [custom image](https://www.digitalocean.com/docs/images/custom-images/). Digital Ocean offers a [quick start guide](https://www.digitalocean.com/docs/images/custom-images/quickstart/) that walks you through the process.
+
+{{< note >}} In some cases, upload of bzip2-compressed custom images has been seen to time out or fail. In those cases we recommend re-compressing the image files using `gzip` and uploading to a custom location. {{< /note >}}
+
+The _import URL_ should be `https://<channel>.release.flatcar-linux.net/amd64-usr/<version>/flatcar_production_digitalocean_image.bin.bz2`. See the [release page](https://www.flatcar-linux.org/releases/) for version and channel history.
+
+For more details, check out [Launching via the API](#via-the-api).
+
+At the end of the document there are instructions for deploying with Terraform.
+
+
+
+[reboot-docs]: ../../setup/releases/update-strategies
+[release-notes]: https://www.flatcar-linux.org/releases/
+
+## Butane Configs
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the [docs to learn about the supported features][butane-configs]. Note that DigitalOcean doesn't allow an instance's userdata to be modified after the instance has been launched. This isn't a problem since Ignition only runs on the first boot.
+
+You can provide a raw Ignition JSON config to Flatcar Container Linux via the DigitalOcean web console or [via the DigitalOcean API](#via-the-api).
+
+As an example, this Butane YAML config will start an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
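Before handing the output to a cloud provider, it can be worth a quick sanity check that `ignition.json` is valid JSON and declares an Ignition spec version. A minimal sketch, assuming `python3` is available and using an abbreviated stand-in for the real file:

```shell
# Abbreviated stand-in for the transpiled ignition.json (illustrative).
printf '%s' '{"ignition":{"version":"3.3.0"},"systemd":{"units":[{"name":"nginx.service","enabled":true}]}}' > /tmp/ignition.json
# json.load fails loudly on malformed input; otherwise print the spec version.
spec=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["ignition"]["version"])' /tmp/ignition.json)
echo "$spec"
```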
+### Adding more machines
+
+To add more instances to the cluster, just launch more with the same Butane Config. New instances will join the cluster regardless of region.
+
+## SSH to your droplets
+
+Container Linux is set up to be a little more secure than other DigitalOcean images. By default, it uses the `core` user instead of `root` and doesn't use a password for authentication. You'll need to add an SSH key via the web console or add keys/passwords via your Ignition config in order to log in.
+
+To connect to a droplet after it's created, run:
+
+```shell
+ssh core@<droplet-ip>
+```
+
+## Launching droplets
+
+### Via the API
+
+For starters, generate a [Personal Access Token][do-token-settings] and save it in an environment variable:
+
+```shell
+read TOKEN
+# Enter your Personal Access Token
+```
+
+Upload your SSH key via [DigitalOcean's API][do-keys-docs] or the web console. Retrieve the SSH key ID via the ["list all keys"][do-list-keys-docs] method:
+
+```shell
+curl --request GET "https://api.digitalocean.com/v2/account/keys" \
+ --header "Authorization: Bearer $TOKEN"
+```
+
+Save the key ID from the previous command in an environment variable:
+
+```shell
+read SSH_KEY_ID
+# Enter your SSH key ID
+```
+
+If you haven't done so yet, [create a custom image](https://developers.digitalocean.com/documentation/v2/#create-a-custom-image) from the current Flatcar Container Linux Stable version:
+
+```shell
+VER=$(curl https://stable.release.flatcar-linux.net/amd64-usr/current/version.txt | grep -m 1 FLATCAR_VERSION_ID= | cut -d = -f 2)
+curl --request POST "https://api.digitalocean.com/v2/images" \
+ --header "Content-Type: application/json" \
+ --header "Authorization: Bearer $TOKEN" \
+ --data '{
+ "name": "flatcar-stable-'$VER'",
+ "url": "https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_digitalocean_image.bin.bz2",
+ "distribution": "CoreOS",
+ "region": "nyc3",
+ "description": "Flatcar Container Linux",
+ "tags":["stable"]}'
+```
+
+Save the numeric image ID from the previous command in an environment variable:
+
+```shell
+read IMAGE_ID
+```
+
+Create a 512MB droplet with private networking in NYC3 from the image created above, using an Ignition JSON configuration file `config.ign` in your current directory:
+
+```shell
+curl --request POST "https://api.digitalocean.com/v2/droplets" \
+ --header "Content-Type: application/json" \
+ --header "Authorization: Bearer $TOKEN" \
+ --data '{
+ "region":"nyc3",
+ "image":"'$IMAGE_ID'",
+ "size":"512mb",
+ "name":"core-1",
+ "private_networking":true,
+ "ssh_keys":['$SSH_KEY_ID'],
+ "user_data": "'"$(cat config.ign | sed 's/"/\\"/g')"'"
+}'
+```
+
+For more details, check out [DigitalOcean's API documentation][do-api-docs].
+### Via the web console
+
+1. Open the ["new droplet"](https://cloud.digitalocean.com/droplets/new?image=flatcar-stable) page in the web console.
+2. Give the machine a hostname, select the size, and choose a region.
+   (Screenshot: choosing a size and hostname.)
+3. Enable User Data and add your Ignition config in the text box.
+   (Screenshot: droplet settings for networking and Ignition.)
+4. Choose your preferred channel of Container Linux.
+   (Screenshot: choosing a Container Linux channel.)
+5. Select your SSH keys.
+
+Note that DigitalOcean is not able to inject a root password into Flatcar Container Linux images like it does with other images. You'll need to add your keys via the web console or add keys or passwords via your Butane Config in order to log in.
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quick-start] guide or dig into [more specific topics][docs].
+## Terraform
+
+The [`digitalocean`](https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs) Terraform Provider allows you to deploy machines in a declarative way.
+Read more about using Terraform and Flatcar [here](../../provisioning/terraform/).
+
+The following Terraform v0.13 module may serve as a base for your own setup.
+It will also take care of registering your SSH key at Digital Ocean and creating a custom image.
+
+You can clone the setup from the [Flatcar Terraform examples repository](https://github.com/flatcar/flatcar-terraform/tree/main/digitalocean) or create the files manually as we go through them and explain each one.
+
+```
+git clone https://github.com/flatcar/flatcar-terraform.git
+# From here on you could directly run it, TLDR:
+cd digitalocean
+export DIGITALOCEAN_TOKEN=...
+terraform init
+# Edit the server configs or just go ahead with the default example
+terraform plan
+terraform apply
+```
+
+Start with a `digitalocean-droplets.tf` file that contains the main declarations:
+
+```
+terraform {
+ required_version = ">= 0.13"
+ required_providers {
+ digitalocean = {
+ source = "digitalocean/digitalocean"
+ version = "2.5.1"
+ }
+ ct = {
+ source = "poseidon/ct"
+ version = "0.7.1"
+ }
+ template = {
+ source = "hashicorp/template"
+ version = "~> 2.2.0"
+ }
+ null = {
+ source = "hashicorp/null"
+ version = "~> 3.0.0"
+ }
+ }
+}
+
+resource "digitalocean_ssh_key" "first" {
+ name = var.cluster_name
+ public_key = var.ssh_keys.0
+}
+
+resource "digitalocean_custom_image" "flatcar" {
+ name = "flatcar-stable-${var.flatcar_stable_version}"
+ url = "https://stable.release.flatcar-linux.net/amd64-usr/${var.flatcar_stable_version}/flatcar_production_digitalocean_image.bin.bz2"
+ regions = [var.datacenter]
+}
+
+resource "digitalocean_droplet" "machine" {
+ for_each = toset(var.machines)
+ name = "${var.cluster_name}-${each.key}"
+ image = digitalocean_custom_image.flatcar.id
+ region = var.datacenter
+ size = var.server_type
+ ssh_keys = [digitalocean_ssh_key.first.fingerprint]
+ user_data = data.ct_config.machine-ignitions[each.key].rendered
+}
+
+data "ct_config" "machine-ignitions" {
+ for_each = toset(var.machines)
+ content = data.template_file.machine-configs[each.key].rendered
+}
+
+data "template_file" "machine-configs" {
+ for_each = toset(var.machines)
+ template = file("${path.module}/machine-${each.key}.yaml.tmpl")
+
+ vars = {
+ ssh_keys = jsonencode(var.ssh_keys)
+ name = each.key
+ }
+}
+```
+
+Create a `variables.tf` file that declares the variables used above:
+
+```
+variable "machines" {
+ type = list(string)
+ description = "Machine names, corresponding to machine-NAME.yaml.tmpl files"
+}
+
+variable "cluster_name" {
+ type = string
+ description = "Cluster name used as prefix for the machine names"
+}
+
+variable "ssh_keys" {
+ type = list(string)
+ description = "SSH public keys for user 'core' (and to register on Digital Ocean for the first)"
+}
+
+variable "server_type" {
+ type = string
+ default = "s-1vcpu-1gb"
+ description = "The server type to rent"
+}
+
+variable "datacenter" {
+ type = string
+ description = "The region to deploy in"
+}
+
+variable "flatcar_stable_version" {
+ type = string
+ description = "The Flatcar Stable release you want to use for the initial installation, e.g., 2605.12.0"
+}
+```
+
+An `outputs.tf` file shows the resulting IP addresses:
+
+```
+output "ip-addresses" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => digitalocean_droplet.machine[key].ipv4_address
+ }
+}
+```
+
+Now you can use the module by declaring the variables and a Container Linux Configuration for a machine.
+First create a `terraform.tfvars` file with your settings:
+
+```
+cluster_name = "mycluster"
+machines = ["mynode"]
+datacenter = "nyc3"
+ssh_keys = ["ssh-rsa AA... me@mail.net"]
+flatcar_stable_version = "x.y.z"
+```
+
+You can resolve the latest Flatcar Stable version with this shell command:
+
+```shell
+curl -sSfL https://stable.release.flatcar-linux.net/amd64-usr/current/version.txt | grep -m 1 FLATCAR_VERSION_ID= | cut -d = -f 2
+```
+
+The machine name listed in the `machines` variable is used to retrieve the corresponding [Butane Config](https://www.flatcar.org/docs/latest/provisioning/config-transpiler/configuration/).
+For each machine in the list, you should have a `machine-NAME.yaml.tmpl` file with a corresponding name.
+
+For example, create the configuration for `mynode` in the file `machine-mynode.yaml.tmpl`:
+
+```yaml
+---
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ${ssh_keys}
+storage:
+ files:
+ - path: /home/core/works
+ filesystem: root
+ mode: 0755
+ contents:
+ inline: |
+ #!/bin/bash
+ set -euo pipefail
+ # This script demonstrates how templating and variable substitution works when using Terraform templates for Container Linux Configs.
+ hostname="$(hostname)"
+ echo My name is ${name} and the hostname is $${hostname}
+```
+
+Finally, run Terraform v0.13 as follows to create the machine:
+
+```
+export DIGITALOCEAN_TOKEN=...
+terraform init
+terraform apply
+```
+
+Log in via `ssh core@IPADDRESS` with the printed IP address (maybe add `-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`).
+
+When you make a change to `machine-mynode.yaml.tmpl` and run `terraform apply` again, the machine will be replaced.
+
+You can find this Terraform module in the repository for [Flatcar Terraform examples](https://github.com/flatcar/flatcar-terraform/tree/main/digitalocean).
+
+[butane-configs]: ../../provisioning/config-transpiler
+[do-api-docs]: https://developers.digitalocean.com/documentation/v2/
+[do-keys-docs]: https://developers.digitalocean.com/documentation/v2/#ssh-keys
+[do-list-keys-docs]: https://developers.digitalocean.com/documentation/v2/#list-all-keys
+[do-token-settings]: https://cloud.digitalocean.com/account/api/tokens
+[quick-start]: ../
+[docs]: ../../
diff --git a/content/docs/latest/installing/cloud/equinix-metal.md b/content/docs/latest/installing/cloud/equinix-metal.md
new file mode 100644
index 00000000..9e0a24cc
--- /dev/null
+++ b/content/docs/latest/installing/cloud/equinix-metal.md
@@ -0,0 +1,319 @@
+---
+title: Running Flatcar Container Linux on Equinix Metal
+linktitle: Running on Equinix Metal
+weight: 10
+aliases:
+ - ../../os/booting-on-packet
+ - ../../cloud-providers/booting-on-packet
+---
+
+Equinix Metal (formerly known as Packet) is a bare metal cloud hosting provider. Flatcar Container Linux is installable as one of the default operating system options. You can deploy Flatcar Container Linux servers via the web portal or API. At the end of the document there are instructions for deploying with Terraform.
+
+## Deployment instructions
+
+The first step in deploying any devices on Equinix Metal is to create an account and decide whether you'd like to deploy via the portal or API. The portal is appropriate for small clusters of machines that won't change frequently. If you'll be deploying a lot of machines, or expect your workload to change frequently, it is much more efficient to use the API. You can generate an API token through the portal once you've set up an account and payment method.
+
+### Projects
+
+Equinix Metal has a concept of 'projects' that represent a grouping of machines that defines several other aspects of the service. A project defines who on the team has access to manage the machines in your account. Projects also define your private network; all machines in a given project will automatically share backend network connectivity. The SSH keys of all team members associated with a project will be installed to all newly provisioned machines in a project. All servers need to be in a project, even if there is only one server in that project.
+
+### Portal instructions
+
+Once logged into the portal, click the 'New server' button, choose Flatcar Container Linux from the menu of operating systems, and choose which region you want the server to be deployed in. If you choose to enter a custom Ignition config, you can enable 'Add User Data' and paste it there. The SSH key that you associate with your account and any other team members' keys that are on the project will be added to your Flatcar Container Linux machine once it is provisioned.
+
+### API instructions
+
+If you elect to use the API to provision machines on Equinix Metal, you should consider using [one of the language libraries](https://metal.equinix.com/developers/docs/libraries/) to code against. As an example, this is how you would launch a single machine with a curl command ([API documentation](https://metal.equinix.com/developers/api/)).
+
+```shell
+# Replace items in angle brackets (<>) with the appropriate values.
+
+curl -X POST \
+-H 'Content-Type: application/json' \
+-H 'Accept: application/json' \
+-H 'X-Auth-Token: <api-token>' \
+-d '{"hostname": "<hostname>", "plan": "c3.small.x86", "facility": "da11", "operating_system": "flatcar_stable", "userdata": "<userdata>"}' \
+https://api.equinix.com/metal/v1/projects/<project-id>/devices
+```
+
+Double quotes in the `<userdata>` value must be escaped such that the request body is valid JSON. See the Butane Config section below for more information about accepted forms of userdata.
+
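A sketch of that escaping: turn each `"` in the Ignition document into `\"` before splicing it into the request body (the hostname and config below are illustrative stand-ins; Ignition output is single-line JSON, so quotes are the main concern):

```shell
config='{"ignition":{"version":"3.3.0"}}'   # stand-in for the Ignition userdata
escaped=$(printf '%s' "$config" | sed 's/"/\\"/g')
body='{"hostname": "flatcar1", "operating_system": "flatcar_stable", "userdata": "'"$escaped"'"}'
# Round-trip: parsing the body must give back the original config string.
roundtrip=$(printf '%s' "$body" | python3 -c 'import json,sys; print(json.load(sys.stdin)["userdata"])')
echo "$roundtrip"
```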
+## iPXE booting
+
+If you need to run a Flatcar Container Linux image which is not available through the OS option in the API, you can boot via 'Custom iPXE'.
+This is the case for ARM64 images right now as they are not available via Equinix Metal's API.
+
+Assuming you want to boot an Alpha image via iPXE on a `c2.large.arm` machine, you have to provide this URL for 'Custom iPXE Settings':
+
+```text
+https://alpha.release.flatcar-linux.net/arm64-usr/current/flatcar_production_packet.ipxe
+```
+
+Do not forget to provide an Ignition config with your SSH key because the PXE images don't have any OEM packages which could fetch the Equinix Metal Project's SSH keys after booting.
+
+Unless configured otherwise, iPXE booting is only done on the first boot because you are expected to install the operating system to the hard disk yourself.
+
+## Butane Configs
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the [docs to learn about the supported features][butane-configs]. Note that Equinix Metal doesn't allow an instance's userdata to be modified after the instance has been launched. This isn't a problem since Ignition only runs on the first boot.
+
+You can provide a raw Ignition JSON config to Flatcar Container Linux via Equinix Metal's userdata field.
+
+As an example, this Butane YAML config will start an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
+## Disabling/enabling autologin
+
+Beginning with Flatcar major version 3185, the `kernelArguments` directive in Ignition v3 allows you to add/remove the `flatcar.autologin` kernel command line parameter that is set in `grub.cfg`.
+The following short Butane YAML config (to be transpiled to Ignition v3 JSON) ensures that the `flatcar.autologin` kernel parameter gets removed; as part of the first boot this is applied through an immediate reboot before the instance comes up:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+kernel_arguments:
+ should_not_exist:
+ - flatcar.autologin
+```
+
+With `should_exist` instead of `should_not_exist` the argument would be added if it isn't set in `grub.cfg` already.
+
+Read more about setting kernel command line parameters this way [here](../../../setup/customization/other-settings/#adding-custom-kernel-boot-options).
+
+In case you want to disable the autologin on the console with Ignition v2 where no `kernelArguments` directive exists, you can use the following directive in your [Container Linux Config][cl-configs] YAML utilizing [ct][ct].
+To take effect it requires an additional reboot.
+
+```yaml
+storage:
+ filesystems:
+ - name: oem
+ mount:
+ device: /dev/disk/by-label/OEM
+ format: btrfs
+ label: OEM
+ files:
+ - path: /grub.cfg
+ filesystem: oem
+ mode: 0644
+ append: true
+ contents:
+ inline: |
+ set linux_append=""
+```
+
+To take effect directly on the first boot, the alternative is to create a `getty@.service` drop-in, shown here as a CLC snippet:
+
+```yaml
+systemd:
+ units:
+ - name: getty@.service
+ dropins:
+ - name: 10-autologin.conf
+ contents: |
+ [Service]
+ ExecStart=
+ ExecStart=-/sbin/agetty --noclear %I $TERM
+```
+
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+## Terraform
+
+The [`metal`](https://registry.terraform.io/providers/equinix/metal/latest/docs) Terraform Provider allows you to deploy machines in a declarative way.
+Read more about using Terraform and Flatcar [here](../../provisioning/terraform/).
+
+The following Terraform v0.13 module may serve as a base for your own setup.
+
+You can clone the setup from the [Flatcar Terraform examples repository](https://github.com/flatcar/flatcar-terraform/tree/main/equinix-metal-aka-packet) or create the files manually as we go through them and explain each one.
+
+```
+git clone https://github.com/flatcar/flatcar-terraform.git
+# From here on you could directly run it, TLDR:
+cd equinix-metal-aka-packet
+export METAL_AUTH_TOKEN=...
+terraform init
+# Edit the server configs or just go ahead with the default example
+terraform plan
+terraform apply
+```
+
+Start with a `metal-machines.tf` file that contains the main declarations:
+
+```
+terraform {
+ required_version = ">= 0.13"
+ required_providers {
+ metal = {
+ source = "equinix/metal"
+ version = "3.3.0-alpha.1"
+ }
+ ct = {
+ source = "poseidon/ct"
+ version = "0.7.1"
+ }
+ template = {
+ source = "hashicorp/template"
+ version = "~> 2.2.0"
+ }
+ }
+}
+
+resource "metal_device" "machine" {
+ for_each = toset(var.machines)
+ hostname = "${var.cluster_name}-${each.key}"
+ plan = var.plan
+ facilities = var.facilities
+ operating_system = "flatcar_stable"
+ billing_cycle = "hourly"
+ project_id = var.project_id
+ user_data = data.ct_config.machine-ignitions[each.key].rendered
+}
+
+data "ct_config" "machine-ignitions" {
+ for_each = toset(var.machines)
+ content = data.template_file.machine-configs[each.key].rendered
+}
+
+data "template_file" "machine-configs" {
+ for_each = toset(var.machines)
+ template = file("${path.module}/machine-${each.key}.yaml.tmpl")
+
+ vars = {
+ ssh_keys = jsonencode(var.ssh_keys)
+ name = each.key
+ }
+}
+```
+
+Create a `variables.tf` file that declares the variables used above:
+
+```
+variable "machines" {
+ type = list(string)
+ description = "Machine names, corresponding to machine-NAME.yaml.tmpl files"
+}
+
+variable "cluster_name" {
+ type = string
+ description = "Cluster name used as prefix for the machine names"
+}
+
+variable "ssh_keys" {
+ type = list(string)
+ description = "SSH public keys for user 'core', only needed if you don't have it specified in the Equinix Metal Project"
+}
+
+variable "facilities" {
+ type = list(string)
+ default = ["sjc1"]
+ description = "List of facility codes with deployment preferences"
+}
+
+variable "plan" {
+ type = string
+ default = "t1.small.x86"
+ description = "The device plan slug"
+}
+
+variable "project_id" {
+ type = string
+ description = "The Equinix Metal Project to deploy in (in the web UI URL after /projects/)"
+}
+```
+
+An `outputs.tf` file shows the resulting IP addresses:
+
+```
+output "ip-addresses" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => metal_device.machine[key].access_public_ipv4
+ }
+}
+```
+
+Now you can use the module by declaring the variables and a Container Linux Configuration for a machine.
+First create a `terraform.tfvars` file with your settings:
+
+```
+cluster_name = "mycluster"
+machines = ["mynode"]
+plan = "t1.small.x86"
+facilities = ["sjc1"]
+project_id = "1...-2...-3...-4...-5..."
+ssh_keys = ["ssh-rsa AA... me@mail.net"]
+```
+
+Create the configuration for `mynode` in the file `machine-mynode.yaml.tmpl`:
+
+```yaml
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ${ssh_keys}
+storage:
+ files:
+ - path: /home/core/works
+ filesystem: root
+ mode: 0755
+ contents:
+ inline: |
+ #!/bin/bash
+ set -euo pipefail
+ hostname="$(hostname)"
+ echo My name is ${name} and the hostname is $${hostname}
+```
+
+Finally, run Terraform v0.13 as follows to create the machine:
+
+```
+export METAL_AUTH_TOKEN=...
+terraform init
+terraform apply
+```
+
+Log in via `ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@IPADDRESS` with the printed IP address.
+
+When you make a change to `machine-mynode.yaml.tmpl` and run `terraform apply` again, the machine will be replaced.
+
+It is recommended to register your SSH key in the Equinix Metal Project to use the out-of-band console. Since Flatcar will fetch this key, too, you can remove it from the YAML config.
+
+You can find this Terraform module in the repository for [Flatcar Terraform examples](https://github.com/flatcar/flatcar-terraform/tree/main/equinix-metal-aka-packet).
+
+
+[quickstart]: ../
+[doc-index]: ../../
+[butane-configs]: ../../provisioning/config-transpiler/
+[cl-configs]: ../../provisioning/cl-config
+[ct]: https://github.com/flatcar/container-linux-config-transpiler
diff --git a/content/docs/latest/installing/cloud/gcp.md b/content/docs/latest/installing/cloud/gcp.md
new file mode 100644
index 00000000..b8fb0d4c
--- /dev/null
+++ b/content/docs/latest/installing/cloud/gcp.md
@@ -0,0 +1,301 @@
+---
+title: Running Flatcar Container Linux on Google Compute Engine
+linktitle: Running on Google Compute Engine
+weight: 15
+aliases:
+ - ../../os/booting-on-google-compute-engine
+ - ../../cloud-providers/booting-on-google-compute-engine
+---
+
+Before proceeding, you will need a GCE account ([GCE free trial][free-trial]) and [install gcloud][gcloud-documentation] on your machine. In each command below, be sure to insert your project name in place of `<project-id>`.
+
+[gce-advanced-os]: http://developers.google.com/compute/docs/transition-v1#customkernelbinaries
+[gcloud-documentation]: https://cloud.google.com/sdk/
+[free-trial]: https://cloud.google.com/free-trial/?utm_source=flatcar&utm_medium=partners&utm_campaign=partner-free-trial
+
+After installation, log into your account with `gcloud auth login` and enter your project ID when prompted.
+
+Flatcar is published by the `kinvolk` publisher on GCE.
+
+## Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes](https://flatcar-linux.org/releases) for specific features and bug fixes.
+
+Create 3 instances using your Ignition config `config.ign`:
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+```shell
+gcloud compute instances create flatcar1 flatcar2 flatcar3 --image-project kinvolk-public --image-family flatcar-stable --zone us-central1-a --machine-type n1-standard-1 --metadata-from-file user-data=config.ign
+```
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+```shell
+gcloud compute instances create flatcar1 flatcar2 flatcar3 --image-project kinvolk-public --image-family flatcar-beta --zone us-central1-a --machine-type n1-standard-1 --metadata-from-file user-data=config.ign
+```
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+```shell
+gcloud compute instances create flatcar1 flatcar2 flatcar3 --image-project kinvolk-public --image-family flatcar-alpha --zone us-central1-a --machine-type n1-standard-1 --metadata-from-file user-data=config.ign
+```
+
+## Uploading an Image
+
+If you prefer, you can also run Flatcar Container Linux by uploading a custom image to your account.
+
+To do so, run the following command:
+
+```shell
+docker run -it quay.io/kinvolk/google-cloud-flatcar-image-upload \
+  --bucket-name <bucket-name> \
+  --project-id <project-id>
+```
+
+Where:
+
+- `<bucket-name>` should be a valid [bucket][bucket] name.
+- `<project-id>` should be your project ID.
+
+During execution, the script will ask you to log into your Google account and then create all necessary resources for
+uploading an image. It will then download the requested Flatcar Container Linux image and upload it to the Google Cloud.
+
+To see all available options, run:
+
+```shell
+docker run -it quay.io/kinvolk/google-cloud-flatcar-image-upload --help
+
+Usage: /usr/local/bin/upload_images.sh [OPTION...]
+
+ Required arguments:
+ -b, --bucket-name Name of GCP bucket for storing images.
+ -p, --project-id ID of the project for creating bucket.
+
+ Optional arguments:
+ -c, --channel Flatcar Container Linux release channel. Defaults to 'stable'.
+ -v, --version Flatcar Container Linux version. Defaults to 'current'.
+ -i, --image-name Image name, which will be used later in Lokomotive configuration. Defaults to 'flatcar-'.
+
+ Optional flags:
+ -f, --force-reupload If used, image will be uploaded even if it already exist in the bucket.
+ -F, --force-recreate If user, if compute image already exist, it will be removed and recreated.
+```
+
+The Dockerfile for the `quay.io/kinvolk/google-cloud-flatcar-image-upload` image is managed [here][google-cloud-flatcar-image-upload].
+
+[bucket]: https://cloud.google.com/storage/docs/key-terms#bucket-names
+[google-cloud-flatcar-image-upload]: https://github.com/flatcar/flatcar-cloud-image-uploader/blob/master/google-cloud-flatcar-image-upload
+
+## Upgrade from CoreOS Container Linux
+
+You can also [upgrade from an existing CoreOS Container Linux system](./update-from-container-linux).
+
+## Butane Config
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the [docs to learn about the supported features][butane-configs].
+
+You can provide a raw Ignition JSON config to Flatcar Container Linux via the Google Cloud console's metadata field `user-data` or via a flag using `gcloud`.
+
+As an example, this Butane YAML config will start an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:latest > ignition.json
+```
+### Additional storage
+
+Additional disks attached to instances can be mounted with a `.mount` unit. Each disk can be accessed via `/dev/disk/by-id/google-<disk-name>`. Here's the Butane Config to format and mount a disk called `database-backup`:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ filesystems:
+ - device: /dev/disk/by-id/scsi-0Google_PersistentDisk_database-backup
+ format: ext4
+systemd:
+ units:
+ - name: media-backup.mount
+ enabled: true
+ contents: |
+ [Mount]
+ What=/dev/disk/by-id/scsi-0Google_PersistentDisk_database-backup
+ Where=/media/backup
+ Type=ext4
+
+ [Install]
+ RequiredBy=local-fs.target
+```
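
The mount unit's name must match its `Where=` path after systemd's path escaping: the leading slash is dropped and the remaining slashes become dashes, which is why `/media/backup` is served by `media-backup.mount`. A minimal Python sketch of that translation (simplified; the real `systemd-escape` tool also hex-escapes special characters):

```python
def systemd_escape_path(path: str) -> str:
    """Simplified systemd path escaping: trim slashes, map '/' to '-'.

    The real systemd-escape additionally replaces non-alphanumeric
    characters with \\x-style escapes; this sketch assumes plain paths.
    """
    trimmed = path.strip("/")
    return trimmed.replace("/", "-") or "-"

# The mount unit for Where=/media/backup must therefore be named:
print(systemd_escape_path("/media/backup") + ".mount")  # media-backup.mount
```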
+
+For more information about mounting storage, Google's [own documentation](https://developers.google.com/compute/docs/disks#attach_disk) is the best source. You can also read about [mounting storage on Flatcar Container Linux][mounting-storage].
+
+### Adding more machines
+
+To add more instances to the cluster, just launch more with the same Ignition config inside of the project.
+
+## SSH and users
+
+Users are added to Container Linux on GCE by the user-provided configuration (e.g., Ignition or cloud-init) and by either the GCE account manager or [GCE OS Login](https://cloud.google.com/compute/docs/instances/managing-instance-access). OS Login is used if it is enabled for the instance; otherwise the GCE account manager is used.
+
+### Using the GCE account manager
+
+You can log in to your Flatcar Container Linux instances using:
+
+```sh
+gcloud compute ssh --zone us-central1-a core@
+```
+
+Users other than `core`, which are set up by the GCE account manager, may not be members of the required groups. If you have issues, try running commands such as `journalctl` with sudo.
+
+### Using OS Login
+
+You can log in using your Google account on instances with OS Login enabled. OS Login needs to be [enabled in the GCE console](https://cloud.google.com/compute/docs/instances/managing-instance-access#enable_oslogin) and on the instance. It is enabled by default on instances provisioned with Container Linux 1898.0.0 or later. Once enabled, you can log into your Container Linux instances using:
+
+```sh
+gcloud compute ssh --zone us-central1-a
+```
+
+This will use your GCE user to log in.
+
+#### Disabling OS Login on newly provisioned nodes
+
+You can disable the OS Login functionality by masking the `oem-gce-enable-oslogin.service` unit:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: oem-gce-enable-oslogin.service
+ mask: true
+```
+
+When disabling OS Login functionality on the instance, it is also recommended to disable it in the GCE console.
+
+## Monitoring
+
+Flatcar isn't a supported distro for the
+[Google Ops Agent](https://cloud.google.com/stackdriver/docs/solutions/agents/ops-agent),
+as the agent is designed for traditional operating systems and for
+monitoring the processes running on them.
+
+Nevertheless, metrics from within Flatcar can be useful additions to the
+VM metrics in Google Cloud Monitoring.
+
+### GCP Custom Metrics
+
+Google provides an API and SDKs to ingest custom metrics. For example,
+this Python script will send the CPU load average and root volume
+utilisation every minute:
+
+**gcp_custom_metrics.py**
+
+```python
+#!/usr/bin/env python3
+from google.cloud import monitoring_v3
+
+import time
+import os
+import shutil
+import requests
+
+metadata_server = "http://metadata/computeMetadata/v1/"
+metadata_flavor = {'Metadata-Flavor' : 'Google'}
+
+gce_name = requests.get(metadata_server + 'instance/hostname', headers = metadata_flavor).text
+gce_project = requests.get(metadata_server + 'project/project-id', headers = metadata_flavor).text
+split_gce_name=gce_name.split(".",2)
+
+client = monitoring_v3.MetricServiceClient()
+project_id = gce_project
+project_name = f"projects/{project_id}"
+
+load_series = monitoring_v3.TimeSeries()
+load_series.metric.type = "custom.googleapis.com/node_load"
+load_series.resource.type = "gce_instance"
+load_series.resource.labels["instance_id"] = split_gce_name[0]
+load_series.resource.labels["zone"] = split_gce_name[1]
+
+du_series = monitoring_v3.TimeSeries()
+du_series.metric.type = "custom.googleapis.com/root_volume_usage"
+du_series.resource.type = "gce_instance"
+du_series.resource.labels["instance_id"] = split_gce_name[0]
+du_series.resource.labels["zone"] = split_gce_name[1]
+
+while True:
+ load1, load5, load15 = os.getloadavg()
+ root_total, root_used, root_free = shutil.disk_usage("/")
+
+ now = time.time()
+ seconds = int(now)
+ nanos = int((now - seconds) * 10 ** 9)
+ interval = monitoring_v3.TimeInterval(
+ {"end_time": {"seconds": seconds, "nanos": nanos}}
+ )
+ load_point = monitoring_v3.Point({"interval": interval, "value": {"double_value": load5}})
+ load_series.points = [load_point]
+ client.create_time_series(request={"name": project_name, "time_series": [load_series]})
+
+ du_point = monitoring_v3.Point({"interval": interval, "value": {"double_value": root_used/root_total}})
+ du_series.points = [du_point]
+ client.create_time_series(request={"name": project_name, "time_series": [du_series]})
+
+ time.sleep(60)
+```
+
+The script can then be packaged up into a Dockerfile:
+
+**Dockerfile**
+
+```dockerfile
+FROM python:3-slim
+
+WORKDIR /usr/src/app
+
+# "requests" is imported by gcp_custom_metrics.py for the metadata queries
+RUN pip3 install --no-cache-dir google-cloud-monitoring requests
+
+COPY gcp_custom_metrics.py .
+
+CMD [ "python3", "./gcp_custom_metrics.py" ]
+```
+
+The resulting image can then be deployed as a container on each Flatcar node.
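
As a sketch, such a deployment could use a Butane Config along these lines; the image reference `registry.example.com/gcp-custom-metrics:1` is a placeholder for wherever you push the image built above:

```yaml
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: gcp-custom-metrics.service
      enabled: true
      contents: |
        [Unit]
        Description=GCP custom metrics exporter
        After=docker.service
        Requires=docker.service
        [Service]
        TimeoutStartSec=0
        ExecStartPre=-/usr/bin/docker rm --force gcp-metrics
        # registry.example.com/gcp-custom-metrics:1 is a placeholder image name
        ExecStart=/usr/bin/docker run --name gcp-metrics --net host registry.example.com/gcp-custom-metrics:1
        ExecStop=/usr/bin/docker stop gcp-metrics
        Restart=always
        RestartSec=5s
        [Install]
        WantedBy=multi-user.target
```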
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[mounting-storage]: ../../setup/storage/mounting-storage
+[quickstart]: ../
+[doc-index]: ../../
+[update-strategies]: ../../setup/releases/update-strategies
+[cl-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/installing/cloud/hetzner.md b/content/docs/latest/installing/cloud/hetzner.md
new file mode 100644
index 00000000..2f1595e3
--- /dev/null
+++ b/content/docs/latest/installing/cloud/hetzner.md
@@ -0,0 +1,339 @@
+---
+title: Running Flatcar Container Linux on Hetzner
+linktitle: Running on Hetzner
+weight: 20
+aliases:
+ - ../../os/booting-on-hetzner
+ - ../../cloud-providers/booting-on-hetzner
+---
+
+[Hetzner Cloud](https://www.hetzner.com/cloud) is a cloud hosting provider.
+Flatcar Container Linux is not available as one of the default operating system options, but you can deploy it by installing it from the rescue OS.
+At the end of the document there are instructions for deploying with Terraform.
+
+## Preparations
+
+Register your SSH key in the Hetzner web interface to be able to log in to a machine.
+
+For programmatic access, create an API token (e.g., used with Terraform as the `HCLOUD_TOKEN` environment variable).
+
+## Provisioning
+
+Select any OS like Debian when you create the instance but boot into the `linux64` rescue OS.
+Connect via SSH and download and run the `flatcar-install` script:
+
+```sh
+apt update
+apt -y install gawk
+curl -fsSLO --retry-delay 1 --retry 60 --retry-connrefused --retry-max-time 60 --connect-timeout 20 https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-install
+chmod +x flatcar-install
+./flatcar-install -s -i ignition.json # optional: you may provide an Ignition config as a file; it should contain your SSH key
+shutdown -r +1 # reboot into Flatcar
+```
+
+## Terraform
+
+The [`hcloud`](https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs) Terraform provider allows you to deploy machines in a declarative way.
+Read more about using Terraform and Flatcar [here](../../provisioning/terraform/).
+
+The following Terraform module may serve as a base for your own setup.
+It will also auto-generate an SSH key for this deployment, and register it with Hetzner.
+
+Since Flatcar does not yet natively support Hetzner metadata, automation will boot into the node's rescue OS during deployment, and install Flatcar from there.
+
+You can clone the setup from the [Flatcar Terraform examples repository](https://github.com/flatcar/flatcar-terraform/tree/main/flatcar-terraform-hetzner) or create the files manually as we go through them and explain each one.
+
+```shell
+git clone https://github.com/flatcar/flatcar-terraform.git
+# From here on you could directly run it, TLDR:
+cd flatcar-terraform-hetzner
+export HCLOUD_TOKEN=...
+terraform init
+# Edit the server configs or just go ahead with the default example
+terraform plan
+terraform apply
+```
+
+Start with a `hetzner-machines.tf` file that contains the main declarations:
+
+```hcl
+resource "tls_private_key" "provisioning" {
+ algorithm = "RSA"
+ rsa_bits = 4096
+}
+
+resource "hcloud_ssh_key" "provisioning_key" {
+ name = "Provisioning key for Flatcar cluster '${var.cluster_name}'"
+ public_key = tls_private_key.provisioning.public_key_openssh
+}
+
+resource "local_file" "provisioning_key" {
+ filename = "${path.module}/.ssh/provisioning_private_key.pem"
+ content = tls_private_key.provisioning.private_key_pem
+ directory_permission = "0700"
+ file_permission = "0400"
+}
+
+resource "local_file" "provisioning_key_pub" {
+ filename = "${path.module}/.ssh/provisioning_key.pub"
+ content = tls_private_key.provisioning.public_key_openssh
+ directory_permission = "0700"
+ file_permission = "0440"
+}
+
+
+resource "hcloud_server" "machine" {
+ for_each = toset(var.machines)
+ name = "${var.cluster_name}-${each.key}"
+ ssh_keys = [hcloud_ssh_key.provisioning_key.id]
+ # boot into rescue OS
+ rescue = "linux64"
+ # dummy value for the OS because Flatcar is not available
+ image = "debian-11"
+ server_type = var.server_type
+ location = var.location
+ connection {
+ host = self.ipv4_address
+ private_key = tls_private_key.provisioning.private_key_pem
+ timeout = "1m"
+ }
+ provisioner "file" {
+ content = data.ct_config.machine-ignitions[each.key].rendered
+ destination = "/root/ignition.json"
+ }
+
+ provisioner "remote-exec" {
+ inline = [
+ "set -ex",
+ "apt update",
+ "apt install -y gawk",
+ "curl -fsSLO --retry-delay 1 --retry 60 --retry-connrefused --retry-max-time 60 --connect-timeout 20 https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-install",
+ "chmod +x flatcar-install",
+ "./flatcar-install -s -i /root/ignition.json -C ${var.release_channel}",
+ "shutdown -r +1",
+ ]
+ }
+
+ provisioner "remote-exec" {
+ connection {
+ host = self.ipv4_address
+ private_key = tls_private_key.provisioning.private_key_pem
+ timeout = "3m"
+ user = "core"
+ }
+
+ inline = [
+ "sudo hostnamectl set-hostname ${self.name}",
+ ]
+ }
+}
+
+data "ct_config" "machine-ignitions" {
+ for_each = toset(var.machines)
+ strict = true
+ content = file("${path.module}/server-configs/${each.key}.yaml")
+ snippets = [
+ data.template_file.core_user.rendered
+ ]
+}
+
+data "template_file" "core_user" {
+ template = file("${path.module}/core-user.yaml.tmpl")
+ vars = {
+ ssh_keys = jsonencode(concat(var.ssh_keys, [tls_private_key.provisioning.public_key_openssh]))
+ }
+}
+```
+
+Create a `variables.tf` file that declares the variables used above:
+
+```hcl
+variable "machines" {
+ type = list(string)
+ description = "Machine names, corresponding to server-configs/NAME.yaml files"
+}
+
+variable "cluster_name" {
+ type = string
+ description = "Cluster name used as prefix for the machine names"
+}
+
+variable "ssh_keys" {
+ type = list(string)
+ default = []
+ description = "Additional SSH public keys for user 'core'."
+}
+
+variable "server_type" {
+ type = string
+ default = "cx11"
+ description = "The server type to rent."
+}
+
+variable "location" {
+ type = string
+ default = "fsn1"
+ description = "The Hetzner region code for the region to deploy to."
+}
+
+variable "release_channel" {
+ type = string
+ description = "Release channel"
+ default = "stable"
+
+ validation {
+ condition = contains(["lts", "stable", "beta", "alpha"], var.release_channel)
+ error_message = "release_channel must be lts, stable, beta, or alpha."
+ }
+}
+```
+
+An `outputs.tf` file for showing the nodes' IP addresses, IDs, and names, as well as the SSH key generated for the deployment:
+
+```hcl
+output "provisioning_public_key_file" {
+ value = local_file.provisioning_key_pub.filename
+}
+
+output "provisioning_private_key_file" {
+ value = local_file.provisioning_key.filename
+}
+
+output "ipv4" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => hcloud_server.machine[key].ipv4_address
+ }
+}
+
+output "ipv6" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => hcloud_server.machine[key].ipv6_address
+ }
+}
+
+output "id" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => hcloud_server.machine[key].id
+ }
+}
+
+output "name" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => hcloud_server.machine[key].name
+ }
+}
+```
+
+Define a user for logging in to the node(s) in a file `core-user.yaml.tmpl`:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys: ${ssh_keys}
+```
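
The `${ssh_keys}` placeholder is rendered by the `template_file` data source with `jsonencode()`, which works because JSON is a subset of YAML: the encoded list is a valid YAML flow sequence. A small Python sketch of the same idea (the key strings are hypothetical):

```python
import json

# Hypothetical keys standing in for var.ssh_keys plus the generated provisioning key
ssh_keys = ["ssh-ed25519 AAAA... user@host", "ssh-rsa BBBB... admin@host"]

template = "ssh_authorized_keys: ${ssh_keys}"
# Terraform's jsonencode() produces the same shape as json.dumps();
# the result is a YAML flow sequence, so Butane parses it as a list.
rendered = template.replace("${ssh_keys}", json.dumps(ssh_keys))
print(rendered)
```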
+
+Lastly, define a file `versions.tf` and set the desired Terraform and provider versions there:
+
+```hcl
+terraform {
+ required_version = ">= 0.14"
+ required_providers {
+ hcloud = {
+ source = "hetznercloud/hcloud"
+ version = "1.38.2"
+ }
+ ct = {
+ source = "poseidon/ct"
+ version = "0.11.0"
+ }
+ template = {
+ source = "hashicorp/template"
+ version = "~> 2.2.0"
+ }
+ null = {
+ source = "hashicorp/null"
+ version = "~> 3.2.1"
+ }
+ }
+}
+```
+
+Done!
+
+Now you can use the module by declaring the variables and a Butane Config for a machine.
+Define your cluster in a file `terraform.tfvars`:
+
+```hcl
+# Server names are [cluster]-[machine #1], [cluster]-[machine #2] ... etc.
+cluster_name = "flatcar"
+
+# Uses server-configs/server1.yaml
+machines = ["server1"]
+
+# One of nbg1, fsn1, hel1, or ash
+location = "fsn1"
+
+# Smallest instance size
+server_type = "cx11"
+
+# Additional SSH "authorized hosts" keys for the "core" user.
+# ssh_keys = [ "...", "..." ]
+
+# One of "lts", "stable", "beta", or "alpha"
+release_channel = "stable"
+```
+
+The above references a deployment configuration in [Butane](../../../provisioning/config-transpiler/configuration/) syntax; `server-configs/server1.yaml`.
+This is used to set up containers on your node, e.g. for a simple service, or to kick off bootstrapping a complex control plane like Kubernetes.
+
+The example below will run a simple web server on the node. Create a file `server-configs/server1.yaml` with the following contents:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+
+Finally, run Terraform as follows to create the machine:
+
+```shell
+export HCLOUD_TOKEN=...
+terraform init
+terraform apply
+```
+
+Terraform will print the server information (name, IPv4 and IPv6 addresses, and ID) after the deployment has concluded. The deployment will create an SSH key pair in `.ssh/`.
+
+You can now log in via `ssh -i ./.ssh/provisioning_private_key.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@[SERVER-IP]`.
+
+When you make a change to `terraform.tfvars` (e.g. to add more nodes) and/or to `server-configs/server1.yaml`, make sure to run `terraform apply` again to update your deployment.
+Note that changes to existing server configurations (like `server-configs/server1.yaml`) will replace the existing machine.
+
+As mentioned in the beginning, you can find this Terraform module in the repository for [Flatcar Terraform examples](https://github.com/flatcar/flatcar-terraform/tree/main/hetzner).
diff --git a/content/docs/latest/installing/cloud/openstack.md b/content/docs/latest/installing/cloud/openstack.md
new file mode 100644
index 00000000..5f29a2fb
--- /dev/null
+++ b/content/docs/latest/installing/cloud/openstack.md
@@ -0,0 +1,205 @@
+---
+title: Running Flatcar Container Linux on OpenStack
+linktitle: Running on OpenStack
+weight: 10
+aliases:
+ - ../../os/booting-on-openstack
+ - ../../cloud-providers/booting-on-openstack
+---
+
+These instructions will walk you through downloading Flatcar Container Linux for OpenStack, importing it with the `glance` tool, and running your first cluster with the `nova` tool.
+
+## Import the image
+
+These steps will download the Flatcar Container Linux image, uncompress it, and then import it into the glance image store.
+
+### Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+```shell
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_openstack_image.img.bz2
+bunzip2 flatcar_production_openstack_image.img.bz2
+```
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+```shell
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_openstack_image.img.bz2
+bunzip2 flatcar_production_openstack_image.img.bz2
+```
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+```shell
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_openstack_image.img.bz2
+bunzip2 flatcar_production_openstack_image.img.bz2
+```
+
+Once the download completes, add the Flatcar Container Linux image into Glance:
+
+```shell
+$ glance image-create --name Container-Linux \
+ --container-format bare \
+ --disk-format qcow2 \
+ --file flatcar_production_openstack_image.img
++------------------+--------------------------------------+
+| Property         | Value                                |
++------------------+--------------------------------------+
+| checksum         | 4742f3c30bd2dcbaf3990ac338bd8e8c     |
+| container_format | bare                                 |
+| created_at       | 2013-08-29T22:21:22                  |
+| deleted          | False                                |
+| deleted_at       | None                                 |
+| disk_format      | qcow2                                |
+| id               | cdf3874c-c27f-4816-bc8c-046b240e0edd |
+| is_public        | False                                |
+| min_disk         | 0                                    |
+| min_ram          | 0                                    |
+| name             | Container-Linux                      |
+| owner            | 8e662c811b184482adaa34c89a9c33ae     |
+| protected        | False                                |
+| size             | 363660800                            |
+| status           | active                               |
+| updated_at       | 2013-08-29T22:22:04                  |
++------------------+--------------------------------------+
+```
+
+Optionally add the `--visibility public` flag to make this image available outside of the configured OpenStack account tenant.
+
+## Butane Configs
+
+Flatcar Container Linux allows you to configure machine parameters, launch systemd units on startup and more via Butane Configs. These configs are then transpiled into Ignition JSON configs and given to booting machines. Jump over to the [docs to learn about the supported features][butane-configs]. We're going to provide our Butane Config to OpenStack via the user-data flag. Our Butane Config will also contain SSH keys that will be used to connect to the instance. In order for this to work your OpenStack cloud provider must support [config drive][config-drive] or the OpenStack metadata service.
+
+[config-drive]: http://docs.openstack.org/user-guide/cli_config_drive.html
+
+As an example, this Butane YAML config will start an NGINX Docker container:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa ABCD...
+systemd:
+ units:
+ - name: nginx.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=NGINX example
+ After=docker.service
+ Requires=docker.service
+ [Service]
+ TimeoutStartSec=0
+ ExecStartPre=-/usr/bin/docker rm --force nginx1
+ ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+ ExecStop=/usr/bin/docker stop nginx1
+ Restart=always
+ RestartSec=5s
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:release > ignition.json
+```
+
+The `coreos-metadata.service` saves metadata variables to `/run/metadata/flatcar`. Systemd units can use them with `EnvironmentFile=/run/metadata/flatcar` in the `[Service]` section when setting `Requires=coreos-metadata.service` and `After=coreos-metadata.service` in the `[Unit]` section.
+Unfortunately, systems relying on config drive are currently unsupported.
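
For example, a unit along these lines could consume the metadata (a sketch; `COREOS_OPENSTACK_IPV4_LOCAL` is an assumption, so inspect `/run/metadata/flatcar` on a node for the exact variable names):

```yaml
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: metadata-example.service
      enabled: true
      contents: |
        [Unit]
        Description=Example unit consuming node metadata
        Requires=coreos-metadata.service
        After=coreos-metadata.service
        [Service]
        Type=oneshot
        EnvironmentFile=/run/metadata/flatcar
        # COREOS_OPENSTACK_IPV4_LOCAL is an assumed variable name
        ExecStart=/usr/bin/echo "Local IPv4: ${COREOS_OPENSTACK_IPV4_LOCAL}"
        [Install]
        WantedBy=multi-user.target
```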
+
+## Launch cluster
+
+Boot the machines with the `nova` CLI, referencing the image ID from the import step above and your [Ignition JSON file from Butane][butane-configs]:
+
+```shell
+nova boot \
+--user-data ./config.ign \
+--image cdf3874c-c27f-4816-bc8c-046b240e0edd \
+--key-name flatcar \
+--flavor m1.medium \
+--min-count 3 \
+--security-groups default,flatcar
+```
+
+To use config drive you may need to add `--config-drive=true` to the command above.
+
+If you have more than one network, you may have to be explicit in the nova boot command.
+
+```shell
+--nic net-id=5b9c5ef6-28b9-4781-ac18-d7d86765fd38
+```
+
+You can see the IDs for your configured networks by running:
+
+```shell
+nova network-list
++--------------------------------------+---------+------+
+| ID | Label | Cidr |
++--------------------------------------+---------+------+
+| f54b48c7-34fc-4828-8ee9-21b623c7b8f9 | public | - |
+| 5b9c5ef6-28b9-4781-ac18-d7d86765fd38 | private | - |
++--------------------------------------+---------+------+
+```
+
+Your first Flatcar Container Linux cluster should now be running. The only thing left to do is find an IP and SSH in.
+
+```shell
+$ nova list
++--------------------------------------+-----------------+--------+------------+-------------+--------------------+
+| ID | Name | Status | Task State | Power State | Networks |
++--------------------------------------+-----------------+--------+------------+-------------+--------------------+
+| a1df1d98-622f-4f3b-adef-cb32f3e2a94d | flatcar-a1df1d98 | ACTIVE | None | Running | private=10.0.0.3 |
+| db13c6a7-a474-40ff-906e-2447cbf89440 | flatcar-db13c6a7 | ACTIVE | None | Running | private=10.0.0.4 |
+| f70b739d-9ad8-4b0b-bb74-4d715205ff0b | flatcar-f70b739d | ACTIVE | None | Running | private=10.0.0.5 |
++--------------------------------------+-----------------+--------+------------+-------------+--------------------+
+```
+
+Finally, SSH into an instance; note that the user is `core`:
+
+```shell
+$ chmod 400 core.pem
+$ ssh -i core.pem core@10.0.0.3
+core@10-0-0-3 ~ $
+```
+
+## Adding more machines
+
+Adding new instances to the cluster is as easy as launching more with the same Butane Config. New instances will join the cluster assuming they can communicate with the others.
+
+Example:
+
+```shell
+nova boot \
+--user-data ./config.ign \
+--image cdf3874c-c27f-4816-bc8c-046b240e0edd \
+--key-name flatcar \
+--flavor m1.medium \
+--security-groups default,flatcar
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/installing/cloud/using-google-cloud-launcher.md b/content/docs/latest/installing/cloud/using-google-cloud-launcher.md
new file mode 100644
index 00000000..249f69bf
--- /dev/null
+++ b/content/docs/latest/installing/cloud/using-google-cloud-launcher.md
@@ -0,0 +1,113 @@
+---
+title: Deploying Flatcar Container Linux using Google Cloud Launcher
+linktitle: Using Google Cloud Launcher
+description: >
+ How to use the Google Cloud Launcher Marketplace feature to launch
+ Flatcar Container Linux on GCP
+weight: 15
+aliases:
+ - ../../os/using-google-cloud-launcher
+ - ../../cloud-providers/using-google-cloud-launcher
+---
+
+You can easily deploy Flatcar Container Linux instances using the
+Google Cloud Launcher. Before proceeding, you will need a GCE
+account ([GCE free trial][free-trial]).
+
+[free-trial]: https://cloud.google.com/free-trial/?utm_source=flatcar&utm_medium=partners&utm_campaign=partner-free-trial
+
+To start the deployment, go to
+
+![GCL landing page](../../img/gcl-landingpage.png)
+Click "Launch".
+
+This will bring up a page where you can choose the parameters for your
+Flatcar Container Linux instance:
+![GCL launcher config](../../img/gcl-launcherconfig.png)
+You can use the default values already filled in for you, or customize them
+for your needs. When you're happy with the settings, click "Deploy".
+
+This will start deploying your instance, showing you the progress as the
+resources get assigned.
+![GCL deploying](../../img/gcl-deploying.png)
+And that's it! Your new Flatcar Container Linux is deploying.
+
+## Inspecting your instance
+
+When complete you should see:
+![GCL deployed](../../img/gcl-deployed.png)
+
+Flatcar supports automatic resizing on first boot, so the installation
+will use all the available space. You can ignore the warning about the
+image and disk size mismatch.
+
+## SSH and users
+
+Users are added to Container Linux on GCE by any user-provided
+configuration (like Ignition or cloud-init) and by either the GCE account
+manager or [GCE OS
+Login](https://cloud.google.com/compute/docs/instances/managing-instance-access).
+OS Login is used if it is enabled for the instance, otherwise the GCE
+account manager is used.
+
+By default, the GCE account manager will provision the machine for the
+username that matches your account.
+
+### Using the web UI
+
+The easiest way to launch an SSH client is directly from the web UI:
+![GCL ssh](../../img/gcl-ssh.png)
+
+This will connect with your user, which has some basic permissions. You
+will be able to inspect the machine and have a look around.
+
+To connect with the `core` user that can administer the whole machine, you
+will need to connect using the `gcloud` command.
+
+### Using OS Login
+
+You can log in using your Google account on instances with OS Login
+enabled. OS Login needs to be [enabled in the GCE
+console](https://cloud.google.com/compute/docs/instances/managing-instance-access#enable_oslogin)
+and on the instance. **It is enabled by default on instances provisioned with
+Flatcar Container Linux**. Once enabled, you can log into your Container Linux
+instances using:
+
+```shell
+gcloud compute ssh --zone us-central1-a
+```
+
+This will use your GCE user to log in.
+
+### Using the GCE account manager
+
+You can log in to your Flatcar Container Linux instances from the command
+line, using the `gcloud` command.
+
+```shell
+gcloud compute ssh --zone core@
+```
+
+Users other than `core`, which are set up by the GCE account manager, may
+not be members of the required groups. If you have issues, try running
+commands such as `journalctl` with sudo.
+
+#### Disabling OS Login on newly provisioned nodes
+
+You can disable the OS Login functionality by masking the `oem-gce-enable-oslogin.service` unit:
+
+```yaml
+systemd:
+ units:
+ - name: oem-gce-enable-oslogin.service
+ mask: true
+```
+
+When disabling OS Login functionality on the instance, it is also recommended to disable it in the GCE console.
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[quickstart]: ../
+[doc-index]: ../../
diff --git a/content/docs/latest/installing/cloud/vmware.md b/content/docs/latest/installing/cloud/vmware.md
new file mode 100644
index 00000000..dd40b92d
--- /dev/null
+++ b/content/docs/latest/installing/cloud/vmware.md
@@ -0,0 +1,365 @@
+---
+title: Running Flatcar Container Linux on VMware
+linktitle: Running on VMware
+weight: 10
+aliases:
+ - ../../os/booting-on-vmware
+ - ../../cloud-providers/booting-on-vmware
+---
+
+These instructions walk through running Flatcar Container Linux on VMware Fusion or ESXi. If you are familiar with another VMware product, you can use these instructions as a starting point.
+
+## Running the VM
+
+### Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+```shell
+curl -LO https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vmware_ova.ova
+```
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+```shell
+curl -LO https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vmware_ova.ova
+```
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+```shell
+curl -LO https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vmware_ova.ova
+```
+
+### Booting with VMware vSphere/ESXi from the web interface
+
+Use the vSphere Client/ESXi web interface to deploy the VM as follows:
+
+1. In the menu, click `File` > `Deploy OVF Template...`
+2. In the wizard, specify the location of the OVA file downloaded earlier
+3. Name your VM
+4. Choose "thin provision" for the disk format
+5. Choose your network settings and [specify provisioning userdata][guestinfo]
+6. Confirm the settings, then click "Finish"
+
+Uncheck `Power on after deployment` in order to edit the VM before booting it the first time.
+
+The last step uploads the files to the ESXi datastore and registers the new VM. You can now tweak VM settings, then power it on.
+
+### Booting with VMware vSphere/ESXi from the command line with ovftool
+
+Use the [`ovftool`][ovftool] to deploy from the command line as follows:
+
+```shell
+ovftool --name=testvm --skipManifestCheck --noSSLVerify \
+  --datastore=datastore1 --powerOn=True \
+  --net:"VM Network=VM Network" --X:waitForIp \
+  --overwrite --powerOffTarget \
+  --X:guest:ignition.config.data="$(base64 --wrap=0 ignition_config.json)" \
+  --X:guest:ignition.config.data.encoding=base64 \
+  ./flatcar_production_vmware_ova.ova 'vi:///:@'
+```
+
+This assumes that you downloaded `flatcar_production_vmware_ova.ova` to your current folder, and that you want to specify an Ignition config as userdata from `ignition_config.json`.
+
+*NB: These instructions were tested with an ESXi v5.5 host.*
+
+### Booting with VMware Workstation 12 or VMware Fusion
+
+Run VMware Workstation GUI:
+
+1. In the menu, click `File` > `Open...`
+2. In the wizard, specify the location of the OVA template downloaded earlier
+3. Name your VM, then click `Import`
+4. (Press `Retry` *if* VMware Workstation raises an "OVF specification" warning)
+5. Edit VM settings if necessary and [specify provisioning userdata][guestinfo]
+6. Start your Flatcar Container Linux VM
+
+*NB: These instructions were tested with a Fusion 8.1 host.*
+
+### Installing via PXE or ISO image
+
+Flatcar Container Linux can also be installed by booting the virtual machine via [PXE][PXE] or the [ISO image][ISO] and then [installing Flatcar Container Linux to disk][install].
+
+## Butane Configs
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the [docs to learn about the supported features][transpiler].
+
+You can provide a raw Ignition config to Flatcar Container Linux via VMware's [Guestinfo interface][guestinfo].
+
+As an example, this Butane Config will start an NGINX Docker container and configure private and public static IP addresses:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/systemd/network/00-vmware.network
+      contents:
+        inline: |
+          [Match]
+          Name=ens192
+          [Network]
+          DHCP=no
+          DNS=1.1.1.1
+          DNS=1.0.0.1
+          [Address]
+          Address=123.45.67.2/29
+          [Address]
+          Address=10.0.0.2/29
+          [Route]
+          Destination=0.0.0.0/0
+          Gateway=123.45.67.1
+          [Route]
+          Destination=10.0.0.0/8
+          Gateway=10.0.0.1
+systemd:
+  units:
+    - name: nginx.service
+      enabled: true
+      contents: |
+        [Unit]
+        Description=NGINX example
+        After=docker.service
+        Requires=docker.service
+        [Service]
+        TimeoutStartSec=0
+        ExecStartPre=-/usr/bin/docker rm --force nginx1
+        ExecStart=/usr/bin/docker run --name nginx1 --pull always --log-driver=journald --net host docker.io/nginx:1
+        ExecStop=/usr/bin/docker stop nginx1
+        Restart=always
+        RestartSec=5s
+        [Install]
+        WantedBy=multi-user.target
+```
+
+Transpile it to Ignition JSON:
+
+```shell
+cat cl.yaml | docker run --rm -i quay.io/coreos/butane:release > ignition.json
+```
+
+For DHCP you don't need to specify any networkd units.
+
+After transpilation, the resulting JSON content can be used in `guestinfo.ignition.config.data` after encoding it to base64 and setting `guestinfo.ignition.config.data.encoding` to `base64`.
+If DHCP is used, the JSON file can also be uploaded to a web server and fetched by Ignition if the HTTP(S) URL is given in `guestinfo.ignition.config.url`.
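+
+As a quick sketch of the encoding step (the file name `ignition.json` and the Ignition version are example values, not prescribed by the docs):
+
+```shell
+# Write a minimal Ignition config (example content only).
+printf '{"ignition": {"version": "3.3.0"}}' > ignition.json
+# base64-encode it without line wrapping, as required for the guestinfo value.
+data=$(base64 -w0 ignition.json)
+# These two properties are what ends up in the VMX / guestinfo settings:
+echo "guestinfo.ignition.config.data = \"$data\""
+echo "guestinfo.ignition.config.data.encoding = \"base64\""
+```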
+
+Beginning with Flatcar major version 3248, fetching remote resources in Ignition or with torcx is supported not only with DHCP but also with a custom network configuration defined via `guestinfo.afterburn.initrd.network-kargs`; see this [example for a static IP address](https://coreos.github.io/afterburn/usage/initrd-network-cmdline/#vmware).
+
+IP configuration specified via `guestinfo.interface.*` and `guestinfo.dns.*` variables is currently not supported with Ignition and will only work if you provide coreos-cloudinit data (cloud-config or a script) as userdata.
+
+### Templating with Butane Configs and setting up metadata
+
+On many cloud providers Ignition will run the [`coreos-metadata.service`](../../provisioning/ignition/metadata/#metadataconf) (which runs `afterburn`) to set up [node metadata](../../provisioning/config-transpiler/dynamic-data). This is not the case with VMware because the network setup is defined by you and is not something generic that `afterburn` would know about.
+
+Here's a Butane configuration example to set up an `etcd` instance with a custom `coreos-metadata.service`:
+
+```yaml
+version: 1.0.0
+variant: flatcar
+systemd:
+  units:
+    - name: etcd-member.service
+      enabled: true
+      dropins:
+        - name: 20-clct-etcd-member.conf
+          contents: |
+            [Unit]
+            Requires=coreos-metadata.service
+            After=coreos-metadata.service
+            [Service]
+            EnvironmentFile=/run/metadata/coreos
+            ExecStart=
+            ExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS --advertise-client-urls="http://${COREOS_CUSTOM_PUBLIC_IPV4}:2379"
+    - name: coreos-metadata.service
+      contents: |
+        [Unit]
+        Description=VMware metadata agent
+        After=nss-lookup.target
+        After=network-online.target
+        Wants=network-online.target
+        [Service]
+        Type=oneshot
+        Restart=on-failure
+        RemainAfterExit=yes
+        Environment=OUTPUT=/run/metadata/coreos
+        ExecStart=/usr/bin/mkdir --parent /run/metadata
+        ExecStart=/usr/bin/bash -c 'echo -e "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")\nCOREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}'
+```
+This populates `/run/metadata/coreos` with variables for a public IP address on interface `ens192` (taking the one that is not starting with `10.…`) and a private IP address on the same interface (taking the one that is starting with `10.…`). You need to adjust this to your network setup. In case you use the `guestinfo.interface.*` variables you could use `/usr/share/oem/bin/vmware-rpctool 'info-get guestinfo.interface.0.ip.0.address'` instead of `ip addr show … | grep …`.
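+
+The `grep` extraction used in the metadata unit can be tried standalone; the sample `ip addr` output below is fabricated to match the addresses used in this guide (GNU grep with `-P` support is assumed):
+
+```shell
+# Simulated `ip addr show ens192` output (sample data, not from a real host):
+sample='    inet 10.0.0.2/29 brd 10.0.0.7 scope global ens192
+    inet 123.45.67.2/29 brd 123.45.67.7 scope global ens192'
+# Same pipelines as in coreos-metadata.service above:
+private=$(echo "$sample" | grep "inet 10." | grep -Po 'inet \K[\d.]+')
+public=$(echo "$sample" | grep -v "inet 10." | grep -Po 'inet \K[\d.]+')
+echo "COREOS_CUSTOM_PRIVATE_IPV4=$private"
+echo "COREOS_CUSTOM_PUBLIC_IPV4=$public"
+```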
+
+## Using coreos-cloudinit Cloud-Configs
+
+Ignition is the preferred way of provisioning because it runs in the initramfs and only at first boot.
+Cloud-Configs are supported, too, but coreos-cloudinit is not actively developed at the moment.
+
+Both Cloud-Config YAML content and raw bash scripts are supported by coreos-cloudinit. You can provide them to Flatcar Container Linux via VMware's [Guestinfo interface][guestinfo].
+
+For `$public_ipv4` and `$private_ipv4` substitutions to work you either need to use static IPs (through `guestinfo.interface.*` as described below) or you need to write the variables `COREOS_PUBLIC_IPV4` and `COREOS_PRIVATE_IPV4` to `/etc/environment` before coreos-cloudinit runs which would require a reboot. Thus, it may be easier to use the `coreos-metadata.service` approach and write these variables to `/run/metadata/coreos`. To do so, set `EnvironmentFile=/run/metadata/coreos`, `Requires=coreos-metadata.service`, and `After=coreos-metadata.service` in your systemd unit.
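+
+A sketch of such a systemd drop-in consuming the metadata variables (the unit and file names are hypothetical):
+
+```ini
+# /etc/systemd/system/my-app.service.d/10-metadata.conf (hypothetical path)
+[Unit]
+Requires=coreos-metadata.service
+After=coreos-metadata.service
+
+[Service]
+# Makes the variables written to /run/metadata/coreos available to ExecStart
+EnvironmentFile=/run/metadata/coreos
+```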
+
+Besides applying the config itself `coreos-cloudinit` supports the `guestinfo.interface.*` variables and will generate a networkd unit from them stored in `/run/systemd/network/`.
+
+The guestinfo variables known to coreos-cloudinit are (taken from [here](https://github.com/flatcar/coreos-cloudinit/blob/flatcar-master/Documentation/vmware-guestinfo.md#cloud-config-vmware-guestinfo-variables)), with `<n>`, `<i>`, `<r>` being numbers starting from 0:
+
+* `guestinfo.hostname` used for `hostnamectl set-hostname`
+* `guestinfo.interface.<n>.name` used in the `[Match]` section of the networkd unit (can include wildcards)
+* `guestinfo.interface.<n>.mac` used in the `[Match]` section of the networkd unit
+* `guestinfo.interface.<n>.dhcp` is either `yes` or `no` and used in the `[Network]` section of the networkd unit
+* `guestinfo.interface.<n>.role` (required to generate a networkd unit for `<n>`) is either `public` or `private` and used for Cloud-Config variable substitutions (`$public_ipv4` etc.) instead of `COREOS_PUBLIC_IPV4` from `/etc/environment`
+* `guestinfo.interface.<n>.ip.<i>.address` is a static IP address with subnet, e.g., `123.4.5.6/29`, used in the `[Address]` section of the networkd unit
+* `guestinfo.interface.<n>.route.<r>.gateway` used in the `[Route]` section of the networkd unit
+* `guestinfo.interface.<n>.route.<r>.destination` is an IP CIDR, e.g., `0.0.0.0/0`, used in the `[Route]` section of the networkd unit
+* `guestinfo.dns.server.<n>` used in the `[Network]` section of any networkd unit
+* `guestinfo.dns.domain.<n>` used in the `[Network]` section of any networkd unit
+* `guestinfo.(ignition|coreos).config.data`, `guestinfo.(ignition|coreos).config.data.encoding`, and `guestinfo.(ignition|coreos).config.url` as described in the surrounding sections
+
+If you rely on `$public_ipv4` and `$private_ipv4` substitutions through `guestinfo.interface.<n>.role` but have both IP addresses on one interface, you may either use variables in `/run/metadata/coreos` as described in the previous section, or provide the second IP address again on a dummy interface with a name that never matches a real interface, just to propagate the IP address to the coreos-cloudinit metadata.
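+
+Put together, a static-IP coreos-cloudinit setup via these guestinfo variables might look like the following VMX fragment (the interface name, addresses, and DNS server are example values):
+
+```ini
+guestinfo.interface.0.name = "ens192"
+guestinfo.interface.0.dhcp = "no"
+guestinfo.interface.0.role = "public"
+guestinfo.interface.0.ip.0.address = "123.4.5.6/29"
+guestinfo.interface.0.route.0.destination = "0.0.0.0/0"
+guestinfo.interface.0.route.0.gateway = "123.4.5.1"
+guestinfo.dns.server.0 = "1.1.1.1"
+```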
+
+## VMware Guestinfo interface
+
+### Setting Guestinfo options
+
+The VMware guestinfo interface is a mechanism for VM configuration. Guestinfo properties are stored in the VMX file, or in the VMX representation in host memory. These properties are available to the VM at boot time. Within the VMX, the names of these properties are prefixed with `guestinfo.`. Guestinfo settings can be injected into VMs in one of the following ways:
+
+* Configure guestinfo in the OVF for deployment. Software like [vcloud director][vcloud director] manipulates OVF descriptors for guest configuration. For details, check out this VMware blog post about [Self-Configuration and the OVF Environment][ovf-selfconfig].
+
+* The ESXi web UI and VMware Workstation Player either directly display the OVF guestinfo variables for editing or allow adding them as parameters in the VM settings before deployment. They can also be changed and added later in the VM settings (but for Ignition configs that requires `touch /boot/flatcar/first_boot` so that Ignition runs again on the next boot).
+
+* The [`ovftool`][ovftool] supports guestinfo variables with `--X:guest:VARIABLE=value`.
+
+* Set guestinfo keys and values from the Flatcar Container Linux guest itself, by using a VMware Tools command like:
+
+```shell
+/usr/share/oem/bin/vmtoolsd --cmd "info-set guestinfo.<variable> <value>"
+```
+
+* Guestinfo keys and values can be set from a VMware Service Console, using the `setguestinfo` subcommand:
+
+```shell
+vmware-cmd /vmfs/volumes/[...]/<vm-dir>/<vm-name>.vmx setguestinfo guestinfo.<variable> <value>
+```
+
+* You can manually modify the VMX and reload it on the VMware Workstation, ESXi host, or in vCenter.
+
+Guestinfo configuration set via the VMware API or with `vmtoolsd` from within the Flatcar Container Linux guest itself is stored in VM process memory and is lost on VM shutdown or reboot.
+
+### Defining the Ignition config or coreos-cloudinit Cloud-Config in Guestinfo
+
+If either the `guestinfo.ignition.config.data` or the `guestinfo.ignition.config.url` userdata property contains an Ignition config, Ignition will apply the referenced config on first boot during the initramfs phase. If it contains a Cloud-Config or script, Ignition will enable a service for coreos-cloudinit that will run on every boot and apply the config.
+
+The userdata is prepared for the guestinfo facility in one of the encodings listed below, specified in the `guestinfo.ignition.config.data.encoding` variable (`<elided>` means the variable is not set):
+
+| Encoding | Command |
+|:---------------|:------------------------------------------------------|
+| <elided> | `sed -e 's/%/%%/g' -e 's/"/%22/g' /path/to/user_data` |
+| base64 | `base64 -w0 /path/to/user_data` |
+| gz+base64 | `gzip -c -9 /path/to/user_data \| base64 -w0` |
+
+base64 (or gz+base64) encoding is mandatory for ESXi; passing unencoded Ignition data will lead to Ignition failures during boot due to lack of escaping in the guestinfo XML data.
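+
+The `<elided>`-encoding command from the table escapes `%` and `"` for the VMX format; a quick check on sample data (the file name is an example):
+
+```shell
+# Write sample userdata containing the two characters that need escaping.
+# printf needs %% to emit a literal %; the file then contains: foo % "bar"
+printf 'foo %% "bar"\n' > user_data
+escaped=$(sed -e 's/%/%%/g' -e 's/"/%22/g' user_data)
+echo "$escaped"
+```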
+
+#### Example
+
+```ini
+guestinfo.ignition.config.data = "ewogICJpZ25pdGlvbiI6IHsgInZlcnNpb24iOiAiMi4wLjAiIH0KfQo="
+guestinfo.ignition.config.data.encoding = "base64"
+```
+
+This example will be decoded into the following Ignition config, but a Cloud-Config can be specified the same way in the variable:
+
+```json
+{
+ "ignition": { "version": "2.0.0" }
+}
+```
+
+Instead of providing the userdata inline, you can also specify a remote HTTP location in `guestinfo.ignition.config.url`.
+Both Ignition and coreos-cloudinit support it, but Ignition relies on DHCP in the initramfs, which means it can't fetch remote resources if you have to use static IPs.
+
+## Logging in
+
+The VGA console has autologin enabled.
+
+Networking can take some time to start under VMware. Once it does, you will see the IP when typing `ip a` or in the VM info that VMware displays.
+
+You can login to the host at that IP using your SSH key, or the password set in your cloud-config:
+
+```shell
+ssh core@YOURIP
+```
+
+## Disabling/enabling autologin
+
+Beginning with Flatcar major version 3185, the `kernelArguments` directive in Ignition v3 allows adding/removing the `flatcar.autologin` kernel command line parameter that is set in `grub.cfg`.
+The following short Butane YAML config (to be transpiled to Ignition v3 JSON) ensures that the `flatcar.autologin` kernel parameter gets removed; as part of the first boot this is applied through an instant reboot before the instance comes up:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+kernel_arguments:
+  should_not_exist:
+    - flatcar.autologin
+```
+
+With `should_exist` instead of `should_not_exist` the argument would be added if it isn't set in `grub.cfg` already.
+
+Read more about setting kernel command line parameters this way [here](../../../setup/customization/other-settings/#adding-custom-kernel-boot-options).
+
+In case you want to disable the autologin on the console with Ignition v2 where no `kernelArguments` directive exists, you can use the following directive in your Container Linux Config YAML.
+To take effect it requires an additional reboot.
+
+```yaml
+storage:
+  filesystems:
+    - name: oem
+      mount:
+        device: /dev/disk/by-label/OEM
+        format: btrfs
+        label: OEM
+  files:
+    - path: /grub.cfg
+      filesystem: oem
+      mode: 0644
+      contents:
+        inline: |
+          set oem_id="vmware"
+          set linux_append=""
+```
+
+To take effect directly on first boot, the alternative is to create a `getty@.service` drop-in, here a CLC snippet:
+
+```yaml
+systemd:
+  units:
+    - name: getty@.service
+      dropins:
+        - name: 10-autologin.conf
+          contents: |
+            [Service]
+            ExecStart=
+            ExecStart=-/sbin/agetty --noclear %I $TERM
+```
+
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted, it's time to explore. Check out the [Flatcar Container Linux Quickstart][quickstart] guide, or dig into [more specific topics][docs].
+
+[cl-configs]: ../../provisioning/cl-config
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[docs]: ../../
+[PXE]: ../bare-metal/booting-with-pxe
+[ISO]: ../bare-metal/booting-with-iso
+[install]: ../bare-metal/installing-to-disk
+[vcloud director]: http://blogs.vmware.com/vsphere/2012/06/leveraging-vapp-vm-custom-properties-in-vcloud-director.html
+[ovf-selfconfig]: http://blogs.vmware.com/vapp/2009/07/selfconfiguration-and-the-ovf-environment.html
+[guestinfo]: #defining-the-ignition-config-or-coreos-cloudinit-cloud-config-in-guestinfo
+[transpiler]: ../../provisioning/config-transpiler/
+[ovftool]: https://www.vmware.com/support/developer/ovf/
diff --git a/content/docs/latest/installing/community-platforms/_index.md b/content/docs/latest/installing/community-platforms/_index.md
new file mode 100644
index 00000000..8aa6b719
--- /dev/null
+++ b/content/docs/latest/installing/community-platforms/_index.md
@@ -0,0 +1,33 @@
+---
+title: Community supported platforms
+weight: 40
+aliases:
+ - ../os/community-platforms
+ - ../community-platforms
+---
+
+The Flatcar Container Linux community has provided support for Flatcar Container Linux on a number of platforms beyond those [officially supported][official-support] (i.e., fully covered in the automated tests) by Kinvolk.
+
+
+
+The platforms and providers listed below each provide support and documentation for running Flatcar Container Linux:
+
+## Cloud providers
+
+* [Exoscale][exoscale]
+* [Rackspace Cloud][rackspace]
+* [Vultr VPS][vultr]
+
+## Other providers
+
+* [Eucalyptus][eucalyptus]
+* [Vagrant][vagrant]
+* [VirtualBox][virtualbox]
+
+[exoscale]: exoscale
+[rackspace]: rackspace
+[vultr]: vultr
+[eucalyptus]: eucalyptus
+[vagrant]: ../vms/vagrant
+[virtualbox]: ../vms/virtualbox
+[official-support]: ../../
diff --git a/content/docs/latest/installing/community-platforms/eucalyptus.md b/content/docs/latest/installing/community-platforms/eucalyptus.md
new file mode 100644
index 00000000..3c29ac28
--- /dev/null
+++ b/content/docs/latest/installing/community-platforms/eucalyptus.md
@@ -0,0 +1,111 @@
+---
+title: Running Flatcar Container Linux on Eucalyptus 3.4
+linktitle: Running on Eucalyptus 3.4
+weight: 10
+aliases:
+ - ../../os/booting-on-eucalyptus
+ - ../../community-platforms/booting-on-eucalyptus
+---
+
+These instructions will walk you through downloading Flatcar Container Linux, bundling the image, and running an instance from it.
+
+## Import the image
+
+These steps will download the Flatcar Container Linux image, uncompress it, convert it from qcow to raw, and then import it into Eucalyptus. In order to convert the image you will need to install `qemu-img` with your favorite package manager.
+
+### Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+
+
+
+
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+```shell
+$ wget -q https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_openstack_image.img.bz2
+$ bunzip2 flatcar_production_openstack_image.img.bz2
+$ qemu-img convert -O raw flatcar_production_openstack_image.img flatcar_production_openstack_image.raw
+$ euca-bundle-image -i flatcar_production_openstack_image.raw -r x86_64 -d /var/tmp
+100% |====================================================================================================| 5.33 GB 59.60 MB/s Time: 0:01:35
+Wrote manifest bundle/flatcar_production_openstack_image.raw.manifest.xml
+$ euca-upload-bundle -m /var/tmp/flatcar_production_openstack_image.raw.manifest.xml -b flatcar-production
+Uploaded flatcar-production/flatcar_production_openstack_image.raw.manifest.xml
+$ euca-register flatcar-production/flatcar_production_openstack_image.raw.manifest.xml --virtualization-type hvm --name "Flatcar Container Linux-Production"
+emi-E4A33D45
+```
+
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+```shell
+$ wget -q https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_openstack_image.img.bz2
+$ bunzip2 flatcar_production_openstack_image.img.bz2
+$ qemu-img convert -O raw flatcar_production_openstack_image.img flatcar_production_openstack_image.raw
+$ euca-bundle-image -i flatcar_production_openstack_image.raw -r x86_64 -d /var/tmp
+100% |====================================================================================================| 5.33 GB 59.60 MB/s Time: 0:01:35
+Wrote manifest bundle/flatcar_production_openstack_image.raw.manifest.xml
+$ euca-upload-bundle -m /var/tmp/flatcar_production_openstack_image.raw.manifest.xml -b flatcar-production
+Uploaded flatcar-production/flatcar_production_openstack_image.raw.manifest.xml
+$ euca-register flatcar-production/flatcar_production_openstack_image.raw.manifest.xml --virtualization-type hvm --name "Flatcar Container Linux-Production"
+emi-E4A33D45
+```
+
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+```shell
+$ wget -q https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_openstack_image.img.bz2
+$ bunzip2 flatcar_production_openstack_image.img.bz2
+$ qemu-img convert -O raw flatcar_production_openstack_image.img flatcar_production_openstack_image.raw
+$ euca-bundle-image -i flatcar_production_openstack_image.raw -r x86_64 -d /var/tmp
+100% |====================================================================================================| 5.33 GB 59.60 MB/s Time: 0:01:35
+Wrote manifest bundle/flatcar_production_openstack_image.raw.manifest.xml
+$ euca-upload-bundle -m /var/tmp/flatcar_production_openstack_image.raw.manifest.xml -b flatcar-production
+Uploaded flatcar-production/flatcar_production_openstack_image.raw.manifest.xml
+$ euca-register flatcar-production/flatcar_production_openstack_image.raw.manifest.xml --virtualization-type hvm --name "Flatcar Container Linux-Production"
+emi-E4A33D45
+```
+
+
+
+
+
+## Boot it up
+
+Now generate the SSH key that will be injected into the image for the `core` user and boot it up!
+
+```sh
+$ euca-create-keypair flatcar > core.pem
+$ euca-run-instances emi-E4A33D45 -k flatcar -t m1.medium -g default
+...
+```
+
+Your first Flatcar Container Linux instance should now be running. The only thing left to do is find the IP and SSH in.
+
+```shell
+$ euca-describe-instances | grep flatcar
+RESERVATION r-BCF44206 498025213678 group-1380012085
+INSTANCE i-22444094 emi-E4A33D45 euca-10-0-1-61.cloud.home euca-172-16-0-56.cloud.internal running flatcar 0
+ m1.small 2013-10-02T05:32:44.096Z one eki-05573B4A eri-EA7436D2 monitoring-enabled 10.0.1.61 172.16.0.56 instance-store paravirtualized 5046c208-fec1-4a6e-b079-e7cdf6a7db8f_one_1
+
+```
+
+Finally, SSH into it; note that the user is `core`:
+
+```shell
+$ chmod 400 core.pem
+$ ssh -i core.pem core@10.0.1.61
+core@10-0-0-3 ~ $
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
+
diff --git a/content/docs/latest/installing/community-platforms/exoscale.md b/content/docs/latest/installing/community-platforms/exoscale.md
new file mode 100644
index 00000000..b0f0a441
--- /dev/null
+++ b/content/docs/latest/installing/community-platforms/exoscale.md
@@ -0,0 +1,150 @@
+---
+title: Running Flatcar Container Linux on Exoscale
+linktitle: Running on Exoscale
+weight: 10
+aliases:
+ - ../../os/booting-on-exoscale
+ - ../../community-platforms/booting-on-exoscale
+---
+
+## Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][reboot-docs], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+The Exoscale Flatcar Container Linux image is built officially and each instance deployment is a unique fresh instance. By default, only the Stable channel is deployed on Exoscale; you can easily [switch to the Beta or Alpha channel][switching-channels].
+
+
+[reboot-docs]: ../../setup/releases/update-strategies
+[switching-channels]: ../../setup/releases/switching-channels
+[release-notes]: https://flatcar-linux.org/releases
+[cloud-config-docs]: https://github.com/flatcar/coreos-cloudinit/blob/master/Documentation/cloud-config.md
+
+## Security groups
+
+Unlike other providers, all Exoscale instances are protected by default on inbound traffic. In order to be able to work in a Flatcar Container Linux cluster you should add the following rules in either your default security group or a security group of your choice and tag all Flatcar Container Linux instances with it:
+
+* SSH: TCP port 22
+* etcd: TCP ports 2379 for client communication and 2380 for server-to-server communication
+* etcd (Deprecated): TCP ports 4001 for client communication and 7001 for server-to-server communication
+
+
+## Cloud-config
+
+Flatcar Container Linux allows you to configure machine parameters, launch systemd units on startup, and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. Cloud-config is intended to bring up a cluster of machines into a minimal useful state and ideally shouldn't be used to configure anything that isn't standard across many hosts. Once the machine is created, cloud-config cannot be modified.
+
+You can provide raw cloud-config data to Flatcar Container Linux via the Exoscale portal or [via the Exoscale compute API](#via-the-api).
+
+In order to leverage Flatcar Container Linux's unique automation attributes, a standard Flatcar Container Linux cloud-config on Exoscale could be configured with:
+
+```cloud-config
+#cloud-config
+
+flatcar:
+  etcd2:
+    # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
+    # specify the initial size of your cluster with ?size=X
+    discovery: https://discovery.etcd.io/<token>
+    advertise-client-urls: http://$public_ipv4:2379,http://$private_ipv4:4001
+    initial-advertise-peer-urls: http://$public_ipv4:2380
+    # listen on both the official ports and the legacy ports
+    # legacy ports can be omitted if your application doesn't depend on them
+    listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
+    listen-peer-urls: http://$public_ipv4:2380
+
+  units:
+    - name: etcd2.service
+      command: start
+```
+
+### Adding more machines
+
+To add more instances to the cluster, just launch more with the same cloud-config, adjusting the FQDN or removing that statement. New instances will join the cluster regardless of location, provided that security groups are correctly configured.
+
+### Modifying cloud-config
+
+It is possible to modify the cloud-config contents during the lifetime of an instance. In order to modify the contents, you need to use the API command `updateVirtualMachine` with the machine in a stopped state.
+
+```sh
+cs stopVirtualMachine id=<UUID>
+cs updateVirtualMachine id=<UUID> userData=<base64-encoded-user-data>
+cs startVirtualMachine id=<UUID>
+```
+
+*Note:* switch the request type from GET to POST if the userData payload is longer than 2 KB.
+
+[API reference for updateVirtualMachine](https://community.exoscale.ch/compute/api/#updatevirtualmachine_GET)
+
+## SSH to your Flatcar Container Linux instances
+
+Flatcar Container Linux does not allow root connections to the instance. By default, it uses the `core` user instead of `root` and doesn't use a password for authentication. You'll need to add an SSH key (or keys) via the web console, or add keys/passwords via your cloud-config, in order to log in.
+
+To log in to a Flatcar Container Linux instance after it's created click on its IP address or run:
+
+```sh
+ssh core@<instance-IP>
+```
+
+## Launching instances
+
+### Via the API
+
+Install and configure the command line client (Python required) with your [API details](https://portal.exoscale.ch/account/profile/api).
+
+```sh
+pip install cs
+vi $HOME/.cloudstack.ini
+[cloudstack]
+endpoint = https://api.exoscale.ch/compute
+key = api key
+secret = secret
+```
+
+To launch a Small 2GB instance with the current Stable Flatcar Container Linux image:
+
+Note: template IDs are available on the [Exoscale website](https://www.exoscale.ch/open-cloud/templates/).
+
+```sh
+cs deployVirtualMachine templateId=2a196b89-0c50-4400-9d42-ef43bcc0fa99 serviceOfferingId=21624abb-764e-4def-81d7-9fc54b5957fb zoneId=1128bd56-b4d9-4ac6-a7b9-c715b187ce11 keyPair=[keypair name]
+```
+
+Be sure to specify your SSH key to be able to access the machine. Management of SSH keys is detailed on the [SSH key page][exo-keys-docs]. For more details, check out [Exoscale's API documentation][exo-api-docs].
+
+[exo-api-docs]: https://community.exoscale.ch/compute/api/
+[exo-keys-docs]: https://community.exoscale.ch/compute/documentation/#SSH_keypairs
+
+### Via the web console
+
+1. Open the ["add new instance"](https://portal.exoscale.ch/compute/instances/add) page in the Exoscale web portal.
+2. Give the machine a hostname, and choose a zone.
+3. Choose the Flatcar Container Linux template.
+
+
+*Choosing Exoscale template*
+
+4. Choose the instance size.
+
+
+*Choosing Exoscale instance size*
+
+5. Select your SSH keys.
+6. Add your optional cloud-config.
+
+
+*Exoscale cloud-config*
+
+7. Create your instance.
+
+Unlike other Exoscale images where the root password is randomly set at startup, Flatcar Container Linux does not have password logon activated. You will need to [configure your public key with Exoscale][exo-keys-docs] in order to login to the Flatcar Container Linux instances or to specify external keys using cloud-config.
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[quickstart]: ../
+[doc-index]: ../../
diff --git a/content/docs/latest/installing/community-platforms/notes-for-distributors.md b/content/docs/latest/installing/community-platforms/notes-for-distributors.md
new file mode 100644
index 00000000..a2cf3b90
--- /dev/null
+++ b/content/docs/latest/installing/community-platforms/notes-for-distributors.md
@@ -0,0 +1,65 @@
+---
+title: Notes for distributors
+weight: 10
+aliases:
+ - ../../os/notes-for-distributors
+ - ../../bare-metal/notes-for-distributors
+---
+
+## Importing images
+
+Images of Flatcar Container Linux alpha releases are hosted at [`https://alpha.release.flatcar-linux.net/amd64-usr/`][alpha-bucket]. There are directories for releases by version as well as `current` with a copy of the latest version. Similarly, beta releases can be found at [`https://beta.release.flatcar-linux.net/amd64-usr/`][beta-bucket], and stable releases at [`https://stable.release.flatcar-linux.net/amd64-usr/`][stable-bucket].
+
+Each directory has a `version.txt` file containing version information for the files in that directory. If you are importing images for use inside your environment it is recommended that you fetch `version.txt` from the `current` directory and use its contents to compute the path to the other artifacts. For example, to download the alpha OpenStack version of Flatcar Container Linux:
+
+1. Download `https://alpha.release.flatcar-linux.net/amd64-usr/current/version.txt`.
+2. Parse `version.txt` to obtain the value of `COREOS_VERSION_ID`, for example `1576.1.0`.
+3. Download `https://alpha.release.flatcar-linux.net/amd64-usr/1576.1.0/flatcar_production_openstack_image.img.bz2`.
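+
+A minimal sketch of steps 1–3 (the sample `version.txt` contents are assumed to match the example above; the real file has more fields):
+
+```shell
+# Pretend we already downloaded version.txt from the `current` directory:
+cat > version.txt <<'EOF'
+COREOS_VERSION_ID=1576.1.0
+EOF
+# Extract the version and compute the versioned artifact URL:
+ver=$(sed -n 's/^COREOS_VERSION_ID=//p' version.txt)
+url="https://alpha.release.flatcar-linux.net/amd64-usr/${ver}/flatcar_production_openstack_image.img.bz2"
+echo "$url"
+```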
+
+It is recommended that you also verify files using the [Flatcar Container Linux Image Signing Key][signing-key]. The GPG signature for each image is a detached `.sig` file that must be passed to `gpg --verify`. For example:
+
+```shell
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2.sig
+gpg --verify flatcar_production_qemu_image.img.bz2.sig
+```
+
+The signing key is rotated annually. We will announce upcoming rotations of the signing key on the [user mailing list][flatcar-user].
+
+[alpha-bucket]: https://alpha.release.flatcar-linux.net/amd64-usr/
+[beta-bucket]: https://beta.release.flatcar-linux.net/amd64-usr/
+[stable-bucket]: https://stable.release.flatcar-linux.net/amd64-usr/
+[signing-key]: https://www.flatcar.org/security/image-signing-key/
+[flatcar-user]: https://groups.google.com/forum/#!forum/flatcar-linux-user
+
+## Image customization
+
+There are two predominant ways that a Flatcar Container Linux image can be easily customized for a specific operating environment: through Ignition, a first-boot provisioning tool that runs during a machine's boot process, and through [cloud-config](https://github.com/flatcar/coreos-cloudinit/blob/master/Documentation/cloud-config.md), an older tool that runs every time a machine boots.
+
+### Ignition
+
+[Ignition][ignition] is a tool that acquires a JSON config file when a machine first boots, and uses this config to perform tasks such as formatting disks, creating files, modifying and creating users, and adding systemd units. How Ignition acquires this config file varies per-platform, and it is highly recommended that providers ensure Ignition supports their respective platform. In addition to providers supported by [upstream Ignition][ign-platforms], Flatcar [supports](https://github.com/flatcar/scripts/blob/main/sdk_container/src/third_party/coreos-overlay/sys-apps/ignition/files/0018-revert-internal-oem-drop-noop-OEMs.patch) cloudsigma, hyperv, interoute, niftycloud, rackspace[-onmetal], and vagrant.
+
+Use Ignition to handle platform specific configuration such as custom networking, running an agent on the machine, or injecting files onto disk. To do this, place an Ignition config at `/usr/share/oem/base/base.ign` and it will be prepended to the user provided config. In addition, any config placed at `/usr/share/oem/base/default.ign` will be executed if a user config is not found. On platforms that support cloud-config, use this feature to run coreos-cloudinit when no Ignition config is provided.
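
As an illustrative sketch (the file path and contents are hypothetical; the spec version follows the Ignition v2 examples used elsewhere in these docs), a minimal `base.ign` that writes a marker file could look like:

```json
{
  "ignition": { "version": "2.2.0" },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "path": "/etc/oem-provisioned",
        "mode": 420,
        "contents": { "source": "data:,provisioned-by-oem" }
      }
    ]
  }
}
```

Because `base.ign` is prepended to the user-provided config, it is a good place for setup that must happen on every instance of the platform.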
+
+Additionally, it is recommended that providers ensure that [Afterburn][coreos-metadata] has support for their platform. This will allow a nicer user experience, as Afterburn will be able to install users' ssh keys and users will be able to reference metadata variables in their systemd units.
+
+[ignition]: ../../provisioning/ignition
+[coreos-metadata]: https://github.com/coreos/afterburn/
+
+### Cloud config
+
+A Flatcar Container Linux image can also be customized using [cloud-config](https://github.com/flatcar/coreos-cloudinit/blob/master/Documentation/cloud-config.md). However, we recommend that users instead use Butane Configs (which are converted into Ignition configs with [`butane`][butane-configs]), for reasons [outlined in the blog post that introduced Ignition][ignition-blog].
+
+Providers that previously supported cloud-config should continue to do so, as not all users have switched over to Butane Configs. New platforms do not need to support cloud-config.
+
+Flatcar Container Linux will automatically parse and execute `/usr/share/oem/cloud-config.yml` if it exists.
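
As an illustrative sketch (the hostname value is hypothetical), such a file could be as simple as:

```yaml
#cloud-config

hostname: "flatcar-oem-example"
```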
+
+[ignition-blog]: https://www.toddpigram.com/2016/04/introducing-ignition-new-coreos-machine.html
+[butane-configs]: ../../provisioning/config-transpiler
+
+## Handling end-user Ignition files
+
+End-users should be able to provide an Ignition file to your platform when specifying their VM's parameters. This file should be made available to Flatcar Container Linux at boot time (e.g. at a known network address, or injected directly onto disk). Examples of these data sources can be found in the [Ignition documentation][ign-platforms].
+
+[ign-platforms]: https://github.com/coreos/ignition/blob/main/docs/supported-platforms.md
diff --git a/content/docs/latest/installing/community-platforms/rackspace.md b/content/docs/latest/installing/community-platforms/rackspace.md
new file mode 100644
index 00000000..4fd2a775
--- /dev/null
+++ b/content/docs/latest/installing/community-platforms/rackspace.md
@@ -0,0 +1,229 @@
+---
+title: Running Flatcar Container Linux on Rackspace
+linktitle: Running on Rackspace
+weight: 10
+aliases:
+ - ../../os/booting-on-rackspace
+ - ../../community-platforms/booting-on-rackspace
+---
+
+These instructions will walk you through running Flatcar Container Linux on the Rackspace OpenStack cloud, which differs slightly from the generic OpenStack instructions. There are two ways to launch a Flatcar Container Linux cluster: launch an entire cluster with Heat or launch machines with Nova.
+
+## Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+### Alpha
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+The following command can be used to determine the image IDs for Alpha:
+
+```shell
+supernova production image-list | grep 'Flatcar Container Linux (Alpha)'
+```
+
+### Beta
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+The following command can be used to determine the image IDs for Beta:
+
+```shell
+supernova production image-list | grep 'Flatcar Container Linux (Beta)'
+```
+
+### Stable
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+The following command can be used to determine the image IDs for Stable:
+
+```shell
+supernova production image-list | grep 'Flatcar Container Linux (Stable)'
+```
+
+## Cloud-config
+
+Flatcar Container Linux allows you to configure machine parameters, launch systemd units on startup and more via cloud-config. Jump over to the [docs to learn about the supported features][cloud-config-docs]. Cloud-config is intended to bring up a cluster of machines into a minimal useful state and ideally shouldn't be used to configure anything that isn't standard across many hosts. Once a machine is created on Rackspace, the cloud-config can't be modified.
+
+You can provide cloud-config data via both Heat and Nova APIs. You **cannot** provide cloud-config via the Control Panel. If you launch machines via the UI, you will have to do all configuration manually.
+
+The most common Rackspace cloud-config looks like:
+
+```yaml
+#cloud-config
+
+flatcar:
+ etcd2:
+ # generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
+ # specify the initial size of your cluster with ?size=X
+ discovery: https://discovery.etcd.io/
+ # multi-region and multi-cloud deployments need to use $public_ipv4
+ advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
+ initial-advertise-peer-urls: http://$private_ipv4:2380
+ # listen on both the official ports and the legacy ports
+ # legacy ports can be omitted if your application doesn't depend on them
+ listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
+ listen-peer-urls: http://$private_ipv4:2380
+ units:
+ - name: etcd2.service
+ command: start
+ - name: fleet.service
+ command: start
+```
+
+The `$private_ipv4` and `$public_ipv4` substitution variables are fully supported in cloud-config on Rackspace.
+
+[cloud-config-docs]: https://github.com/flatcar/coreos-cloudinit/blob/master/Documentation/cloud-config.md
+
+### Mount data disk
+
+Certain server flavors have separate system and data disks. To utilize the data disks, they must be mounted with a `.mount` unit. Check to make sure the `Where=` parameter accurately reflects the location of the block device:
+
+```yaml
+#cloud-config
+flatcar:
+ units:
+ - name: media-data.mount
+ command: start
+ content: |
+ [Mount]
+ What=/dev/disk/by-label/FSLABEL
+ Where=/media/data
+ Type=ext3
+```
+
+Mounting Cloud Block Storage can be done with a mount unit, but should not be included in cloud-config unless the disk is present on the first boot.
+
+For more general information, check out [mounting storage on Flatcar Container Linux][mounting-storage].
+
+## Launch with Nova
+
+We're going to install `rackspace-novaclient`, upload a keypair, and boot a server from one of the image IDs found above.
+
+### Install Supernova tool
+
+The Supernova tool requires Python and `pip`, a Python package manager. If you don't have `pip` installed, install it by running `sudo easy_install pip`. Now let's use `pip` to install Supernova, a tool that lets you easily switch Rackspace regions. Be sure to install these in the order listed:
+
+```shell
+sudo pip install keyring
+sudo pip install rackspace-novaclient
+sudo pip install supernova
+```
+
+### Store account information
+
+Edit your config file (`~/.supernova`) to store your Rackspace username, API key (referenced as `OS_PASSWORD`) and some other settings. The `OS_TENANT_NAME` should be set to your Rackspace account ID, which can be found by clicking on your Rackspace username in the upper right-hand corner of the cloud control panel UI.
+
+```ini
+[production]
+OS_AUTH_URL = https://identity.api.rackspacecloud.com/v2.0/
+OS_USERNAME = username
+OS_PASSWORD = fd62afe2-4686-469f-9849-ceaa792c55a6
+OS_TENANT_NAME = 123456
+OS_REGION_NAME = DFW
+OS_AUTH_SYSTEM = rackspace
+```
+
+We're ready to create a keypair then boot a server with it.
+
+### Create keypair
+
+This guide assumes you already have a public key that you use for your Flatcar Container Linux servers. Note that only RSA keypairs are supported. Load the public key to Rackspace:
+
+```shell
+supernova production keypair-add --pub-key ~/.ssh/flatcar.pub flatcar-key
+```
+
+Check that the key is in your list by running `supernova production keypair-list`:
+
+```shell
++-------------+--------------------------------------------------+
+| Name        | Fingerprint                                      |
++-------------+--------------------------------------------------+
+| flatcar-key | d0:6b:d8:3a:3e:6a:52:43:32:bc:01:ea:c2:0f:49:59  |
++-------------+--------------------------------------------------+
+```
+
+### Boot a server
+
+Boot a new Cloud Server with our new keypair and specify optional cloud-config data:
+
+```shell
+supernova production boot --image <image-id> --flavor performance1-2 --key-name flatcar-key --user-data ~/cloud_config.yml --config-drive true My_Flatcar_Server
+```
+
+On the Alpha channel you can also boot an OnMetal Server with the new keypair and optional cloud-config data:
+
+```shell
+supernova production boot --image <image-id> --flavor onmetal-compute1 --key-name flatcar-key --user-data ~/cloud_config.yml --config-drive true My_Flatcar_Server
+```
+
+You should now see the details of your new server in your terminal and it should also show up in the control panel:
+
+```shell
++------------------------+--------------------------------------+
+| Property | Value |
++------------------------+--------------------------------------+
+| status | BUILD |
+| updated | 2013-11-02T19:43:45Z |
+| hostId | |
+| key_name | flatcar-key |
+| image | Flatcar Container Linux |
+| OS-EXT-STS:task_state | scheduling |
+| OS-EXT-STS:vm_state | building |
+| flavor | 512MB Standard Instance |
+| id | 82dbe66d-0762-4cba-a286-8c1af8431e47 |
+| user_id | 3c55bca772ba4a4bb6a4eb5b25754738 |
+| name | My_Flatcar_Server |
+| adminPass | mgNqEx7I9pQA |
+| tenant_id | 833111 |
+| created | 2013-11-02T19:43:44Z |
+| OS-DCF:diskConfig | MANUAL |
+| accessIPv4 | |
+| accessIPv6 | |
+| progress | 0 |
+| OS-EXT-STS:power_state | 0 |
+| metadata | {} |
++------------------------+--------------------------------------+
+```
+
+### Launching more servers
+
+To launch more servers and have them join your cluster, simply provide the same cloud-config.
+
+## Launch via control panel
+
+You can also launch servers with either the `alpha` or `beta` channel versions via the web-based Control Panel, although you can't provide cloud-config via the UI. To do so:
+
+ 1. Log into your Rackspace Control Panel
+ 2. Click on 'Servers'
+ 3. Click on 'Create Server'
+ 4. Choose server name and region
+ 5. Click on 'Linux', then on 'Flatcar Container Linux' and finally choose '(alpha)' or '(beta)' version
+ 6. Choose a flavor and use 'Advanced Options' to select an SSH key, if available
+ 7. Click on 'Create Server'
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
+[mounting-storage]: ../../setup/storage/mounting-storage
diff --git a/content/docs/latest/installing/community-platforms/vultr.md b/content/docs/latest/installing/community-platforms/vultr.md
new file mode 100644
index 00000000..a21f9656
--- /dev/null
+++ b/content/docs/latest/installing/community-platforms/vultr.md
@@ -0,0 +1,143 @@
+---
+title: Running Flatcar Container Linux on a Vultr VPS
+linktitle: Running on a Vultr VPS
+weight: 10
+aliases:
+ - ../../os/booting-on-vultr
+ - ../../community-platforms/booting-on-vultr
+---
+
+These instructions will walk you through running a single Flatcar Container Linux node. This guide assumes:
+
+* You have an account at [Vultr.com](https://www.vultr.com).
+* You have a public + private key combination generated. Here's a helpful guide if you need to generate these keys: [How to set up SSH keys](https://help.github.com/articles/generating-ssh-keys).
+
+The simplest option to boot up Flatcar Container Linux is to select the "Flatcar Container Linux Stable" operating system from Vultr's default offerings. However, most deployments require a custom `cloud-config`, which can only be achieved in Vultr with an iPXE script. The remainder of this article describes this process.
+
+## Cloud-config
+
+First, you'll need to make a shell script containing your `cloud-config` available at a public URL:
+
+`cloud-config-bootstrap.sh`:
+
+```shell
+#!/bin/bash
+
+cat > "cloud-config.yaml" <<EOF
+#cloud-config
+
+ssh_authorized_keys:
+  - ssh-rsa AAAA... # replace with your public SSH key
+EOF
+
+sudo coreos-cloudinit --from-file ./cloud-config.yaml
+```
+
+## iPXE script
+
+### Alpha
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+A sample script will look like this:
+
+```
+#!ipxe
+
+# Location of your shell script.
+set cloud-config-url http://example.com/cloud-config-bootstrap.sh
+
+set base-url https://alpha.release.flatcar-linux.net/amd64-usr/current
+kernel ${base-url}/flatcar_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
+initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
+boot
+```
+
+### Beta
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+A sample script will look like this:
+
+```
+#!ipxe
+
+# Location of your shell script.
+set cloud-config-url http://example.com/cloud-config-bootstrap.sh
+
+set base-url https://beta.release.flatcar-linux.net/amd64-usr/current
+kernel ${base-url}/flatcar_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
+initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
+boot
+```
+
+### Stable
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+A sample script will look like this:
+
+```
+#!ipxe
+
+# Location of your shell script.
+set cloud-config-url http://example.com/cloud-config-bootstrap.sh
+
+set base-url https://stable.release.flatcar-linux.net/amd64-usr/current
+kernel ${base-url}/flatcar_production_pxe.vmlinuz cloud-config-url=${cloud-config-url}
+initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
+boot
+```
+
+Go to My Servers > Startup Scripts > Add Startup Script, select type "PXE", and input your script. Be sure to replace the cloud-config-url with that of the shell script you created above.
+
+Additional reading can be found at [Booting Flatcar Container Linux with iPXE][booting-with-ipxe] and [Embedded scripts for iPXE](http://ipxe.org/embed).
+
+## Create the VPS
+
+Create a new VPS (any server type and location of your choice), and then:
+
+1. For the "Operating System" select "Custom"
+2. Select "iPXE Custom Script" and the script you created above.
+3. Click "Place Order"
+
+Once you receive the "Subscription Activated" email the VPS will be ready to use.
+
+## Accessing the VPS
+
+You can now log in to Flatcar Container Linux using the associated private key on your local computer. You may need to specify its location using `-i LOCATION`. If you need additional details on how to specify the location of your private key file see [here](http://www.cyberciti.biz/faq/force-ssh-client-to-use-given-private-key-identity-file/).
+
+SSH to the IP of your VPS, and specify the "core" user: `ssh core@IP`
+
+```shell
+$ ssh core@IP
+The authenticity of host 'IP (2a02:1348:17c:423d:24:19ff:fef1:8f6)' can't be established.
+RSA key fingerprint is 99:a5:13:60:07:5d:ac:eb:4b:f2:cb:c9:b2:ab:d7:21.
+Are you sure you want to continue connecting (yes/no)? yes
+Warning: Permanently added '[IP]' (ED25519) to the list of known hosts.
+Enter passphrase for key '/home/user/.ssh/id_rsa':
+Flatcar Container Linux stable (557.2.0)
+core@localhost ~ $
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
+[booting-with-ipxe]: ../../installing/bare-metal/booting-with-ipxe
diff --git a/content/docs/latest/installing/customizing-the-image/_index.md b/content/docs/latest/installing/customizing-the-image/_index.md
new file mode 100644
index 00000000..776de56f
--- /dev/null
+++ b/content/docs/latest/installing/customizing-the-image/_index.md
@@ -0,0 +1,7 @@
+---
+title: Customizing the image
+description: >
+ This section provides information and guidance on customizing
+ Flatcar images by placing files on the root or OEM filesystem or embedding an Ignition config.
+weight: 20
+---
diff --git a/content/docs/latest/installing/customizing-the-image/customize-the-image.md b/content/docs/latest/installing/customizing-the-image/customize-the-image.md
new file mode 100644
index 00000000..d1688ddf
--- /dev/null
+++ b/content/docs/latest/installing/customizing-the-image/customize-the-image.md
@@ -0,0 +1,99 @@
+---
+title: Customizing a Flatcar image
+weight: 30
+---
+
+While [Ignition][ignition] cloud instance userdata is the preferred way of customizing an installation, it can be limiting when the customization concerns the kernel boot arguments or when no cloud instance userdata mechanism is in place.
+The partition with the OS `/usr` filesystem can't be modified because it is signed and gets auto-updated.
+Other partitions like the boot partition, the OEM partition, or even the root partition are open for customization.
+The boot partition can hold an additional EFI boot loader, the OEM partition can hold a GRUB file for the kernel arguments and possibly a default and/or base Ignition configuration, the root partition can hold the OS configuration and additional binaries.
+**Note:** It is important that you never boot the image: the first-boot initialization would make all your instances identical, cause problems with the update server, skip the regeneration of SSH host keys, and prevent Ignition from running. If you have to boot it anyway, for example to run Packer and/or Ansible, see the last section for common problems.
+
+## Mounting a partition for customization
+
+The generic Flatcar Container Linux [image](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_image.bin.bz2) (`.bin`) can be attached directly as a loop device on a Linux host and mounted after decompression with `bunzip2` or `lbunzip2`. The partition to modify needs to be specified by its number:
+
+```shell
+# PART can be 1 (boot), 6 (OEM), 9 (ROOT)
+PART=1
+LOOP=$(sudo losetup --partscan --find --show flatcar_production_image.bin)
+TARGET=$(sudo mktemp -d -p /mnt --suffix -flatcar)
+sudo mount "${LOOP}p${PART}" "$TARGET"
+# Now do your changes on "$TARGET"...
+# Cleanup:
+sudo umount "${TARGET}"
+sudo rmdir "${TARGET}"
+sudo losetup -d "${LOOP}"
+```
+
+If you need to modify the QEMU `qcow2` image or a `vmdk` image, you either need to convert it to a raw image first:
+
+```shell
+qemu-img convert -f qcow2 -O raw flatcar_production_qemu_image.img flatcar_production_qemu_image.bin
+```
+
+Or use the `guestmount` utility (from [libguestfs](https://libguestfs.org/)), which can run as a regular user:
+
+```shell
+# PART can be 1 (boot), 6 (OEM), 9 (ROOT)
+PART=1
+TARGET=$(mktemp -d -p /tmp --suffix -flatcar)
+guestmount -m "/dev/sda${PART}" -a flatcar_production_qemu_image.img "$TARGET"
+# Now do your changes on "$TARGET"...
+guestunmount "$TARGET"
+rmdir "${TARGET}"
+```
+
+In case you converted the raw image for regular loop device mounting, you can also convert it back to qcow2 with `qemu-img convert -O qcow2 INPUT OUTPUT`.
+
+### Example for legacy cgroup AMIs
+
+An example script that generates CGroup V1 AWS EC2 images to be uploaded as AMIs can be found [here](https://raw.githubusercontent.com/flatcar/flatcar-docs/main/create_cgroupv1_ami.sh).
+
+### Customizing the boot partition
+
+Using the above command the boot partition with the EFI binaries can be mounted to place additional firmware on it, e.g., [Raspberry Pi 4 UEFI Firmware](https://github.com/pftf/RPi4/releases/) or similar.
+
+### Customizing the OEM partition
+
+The OEM partition is the most common place for modifications; it is also what makes the various offered Flatcar cloud images different, because it can hold mandatory and/or fallback Ignition configurations, and the `grub.cfg` file for kernel arguments.
+
+The OEM partition is also useful to force a particular Ignition configuration to be used.
+For example, `flatcar-install` offers to write a `config.ign` Ignition file to the OEM partition through the `-i` flag.
+This file is used as the preferred Ignition configuration even when Ignition cloud instance userdata is present. With the special `oem:///` file URL the config can copy files from the OEM partition to the root filesystem (note: if you have many binaries, the OEM partition may be too small and you have to either host them somewhere or place them directly on the root filesystem, see the next section).
+
+As done on most offered Flatcar cloud images, two additional Ignition files can be placed on the OEM partition; they have a broader purpose and apply independently of whether a `config.ign` Ignition file, the Ignition kernel command line URL, or Ignition cloud instance userdata is used.
+The first is `base/base.ign` which is always executed as basic mandatory setup.
+The second file `base/default.ign` has a special fallback function and gets executed only if the found instance userdata is not Ignition JSON.
+This file commonly defines a systemd service via Ignition that runs `coreos-cloudinit` to process the instance userdata later.
+Good examples are [`base/base.ign`](https://github.com/flatcar/coreos-overlay/blob/ad9c06df2c34be3c6d50ffb80f886bdae10b4809/coreos-base/oem-packet/files/base/base.ign) and [`base/default.ign`](https://github.com/flatcar/coreos-overlay/blob/ad9c06df2c34be3c6d50ffb80f886bdae10b4809/coreos-base/oem-packet/files/base/default.ign) files used for Equinix Metal images as they also make use of the `oem:///` source URL to refer to a file placed on the OEM partition.
+
+The `grub.cfg` file gets sourced by GRUB to set up the OEM ID which is used by systemd units to be started conditionally, or to set up kernel parameters like the Ignition config URL (`ignition.config.url`, to fetch the preferred config remotely), or settings required for the hardware.
+Again, a good example is the [`grub.cfg` file](https://github.com/flatcar/coreos-overlay/blob/ad9c06df2c34be3c6d50ffb80f886bdae10b4809/coreos-base/oem-packet/files/grub.cfg) used for Equinix Metal images to set the OEM ID and the kernel parameter `flatcar.autologin` to be able to use the serial console without having to configure a user password.
+
+### Customizing the root partition
+
+To pre-configure the OS you can place binaries and configuration files directly on the root filesystem.
+The recommended way, however, is to use a `base/base.ign` or `config.ign` Ignition file in the OEM partition.
+The advantage is that a `base/base.ign` file even works when the user has the root filesystem recreation option specified in Ignition which reformats the root filesystem and discards any changes placed there directly.
+
+When modifying the root filesystem, make sure you only copy over files that are safe to copy. For example, you can place binaries into `/opt/bin` or configuration files under `/etc`, but you shouldn't initialize the root filesystem by booting it, not even via a chroot (and calling `systemctl` there) or by booting it up as a container, because this leads to the traps described in the next section.
+When you place systemd services under `/etc/systemd/system/my.service` and they have `WantedBy=multi-user.target` in the `[Install]` section, you can pre-enable them with a symlink from `/etc/systemd/system/multi-user.target.wants/my.service` to `/etc/systemd/system/my.service`.
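
A sketch of this pre-enablement, assuming `TARGET` points at the mounted root partition as in the loop-device example above (the unit name `my.service` and its contents are hypothetical; `TARGET` defaults to a scratch directory here so the sketch can be dry-run safely):

```shell
# Illustrative sketch: "my.service" and its contents are hypothetical.
# TARGET is the mounted root partition; default to a scratch directory.
TARGET="${TARGET:-$(mktemp -d)}"

mkdir -p "${TARGET}/etc/systemd/system/multi-user.target.wants"
cat > "${TARGET}/etc/systemd/system/my.service" <<'EOF'
[Unit]
Description=Example one-shot service

[Service]
Type=oneshot
ExecStart=/opt/bin/my-tool

[Install]
WantedBy=multi-user.target
EOF

# Pre-enable the unit: this symlink is what `systemctl enable` would create.
ln -s ../my.service "${TARGET}/etc/systemd/system/multi-user.target.wants/my.service"
```

On first boot, systemd then treats the unit as enabled without any `systemctl` invocation having touched the image.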
+
+You can even pre-populate the container image store by copying the folders `/var/lib/docker` and `/var/lib/containerd` over from a booted Flatcar instance.
+
+## Customization through booting with Packer, VMware base VMs, or chroot/systemd-nspawn
+
+This section serves as a big warning: if you use a booted image, even one that was only booted as a chroot or a systemd-nspawn container, you will run into a lot of problems.
+Please check the OEM and root partition sections above for a saner way of pre-configuring the image.
+If you try to use Packer to customize the image, want to use a once-booted VMware base VM, or even just accidentally booted the image once for testing, you have created OS state that is hard to get rid of.
+It causes security issues and hard-to-debug behavior changes; please modify the image through mounting and copying as described above, because this is easier, safer, and faster.
+
+If you still want to continue with customization through booting, here are some common traps. There can be more, depending on the software components involved; if you are not an expert on those components and their respective state files, you should reconsider your choice.
+The first and easiest problem is that the `/boot/flatcar/first_boot` flag file is lost which normally triggers Ignition to run on first boot. You would have to recreate this file.
+Trickier is the `/etc/machine-id` file, which you have to delete, not just truncate, because it is used not only to identify the instance but also to trigger systemd first-boot semantics, which take care of enabling services through presets. The machine ID must also be unique for the update server to work correctly; otherwise it will not hand out updates to your instance.
+Another problem is the generated SSH host keys, which you have to delete; otherwise each instance based on this image will have the same host keys, and once the image is accessible everyone can impersonate your servers.
+More problems come from weak account credentials used during setup: if you created a dummy account with a password, you have to remove the account again, and if you set up dummy SSH keys for the `core` user, as is common with Vagrant, you have to remove them, too. If you used a `config.ign` file in the OEM partition for bootstrapping, it has to be removed as well.
+You can have a look at the [`image-builder`](https://github.com/kubernetes-sigs/image-builder) Packer and Ansible configuration which avoids most of the common pitfalls but, again, this is not a complete list because it depends on the software components you interact with.
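
A partial sketch of these clean-up steps, assuming the image's root and boot partitions are mounted at `ROOT` and `BOOT` (hypothetical mount points; they default to scratch directories with mock state here so the sketch is safe to dry-run, and this is not a complete list):

```shell
# Partial clean-up sketch; ROOT/BOOT are assumed mount points of the
# image's root and boot partitions (defaults are scratch directories).
ROOT="${ROOT:-$(mktemp -d)}"
BOOT="${BOOT:-$(mktemp -d)}"

# Mock state left behind by a boot, for the dry run only.
mkdir -p "${ROOT}/etc/ssh" "${BOOT}/flatcar"
touch "${ROOT}/etc/machine-id" "${ROOT}/etc/ssh/ssh_host_ed25519_key"

rm -f "${ROOT}/etc/machine-id"          # delete, do not just truncate
rm -f "${ROOT}"/etc/ssh/ssh_host_*key*  # host keys regenerate on next boot
touch "${BOOT}/flatcar/first_boot"      # re-arm Ignition for the next boot
```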
+
+[ignition]: ../../provisioning/ignition
diff --git a/content/docs/latest/installing/vms/_index.md b/content/docs/latest/installing/vms/_index.md
new file mode 100644
index 00000000..fd010d10
--- /dev/null
+++ b/content/docs/latest/installing/vms/_index.md
@@ -0,0 +1,7 @@
+---
+title: Virtual Machines
+description: >
+ This section provides information and guidance on running Flatcar
+ instances on virtual machines.
+weight: 20
+---
diff --git a/content/docs/latest/installing/vms/libvirt.md b/content/docs/latest/installing/vms/libvirt.md
new file mode 100644
index 00000000..cf5e88e8
--- /dev/null
+++ b/content/docs/latest/installing/vms/libvirt.md
@@ -0,0 +1,455 @@
+---
+title: Running Flatcar Container Linux on libvirt
+linktitle: Running on libvirt
+weight: 30
+aliases:
+ - ../../os/booting-with-libvirt
+ - ../../cloud-providers/booting-with-libvirt
+---
+
+This guide explains how to run Flatcar Container Linux with libvirt using the QEMU driver. The libvirt configuration
+file can be used (for example) with `virsh` or `virt-manager`. The guide assumes
+that you already have a running libvirt setup and the `virt-install` tool. If you
+don’t have that, other solutions are most likely easier.
+At the end of the document there are instructions for deploying with Terraform.
+
+You can direct questions to the [IRC channel][irc] or [mailing list][flatcar-dev].
+
+## Download the Flatcar Container Linux image
+
+In this guide, the example virtual machine we are creating is called `flatcar-linux1` and
+all files are stored in `/var/lib/libvirt/images/flatcar-linux`. This is not a requirement; feel free
+to substitute another path if you use a different one.
+
+### Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+#### Alpha
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+We start by downloading the most recent disk image:
+
+```shell
+mkdir -p /var/lib/libvirt/images/flatcar-linux
+cd /var/lib/libvirt/images/flatcar-linux
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2{,.sig}
+gpg --verify flatcar_production_qemu_image.img.bz2.sig
+bunzip2 flatcar_production_qemu_image.img.bz2
+```
+
+#### Beta
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+We start by downloading the most recent disk image:
+
+```shell
+mkdir -p /var/lib/libvirt/images/flatcar-linux
+cd /var/lib/libvirt/images/flatcar-linux
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2{,.sig}
+gpg --verify flatcar_production_qemu_image.img.bz2.sig
+bunzip2 flatcar_production_qemu_image.img.bz2
+```
+
+#### Stable
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+We start by downloading the most recent disk image:
+
+```shell
+mkdir -p /var/lib/libvirt/images/flatcar-linux
+cd /var/lib/libvirt/images/flatcar-linux
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2{,.sig}
+gpg --verify flatcar_production_qemu_image.img.bz2.sig
+bunzip2 flatcar_production_qemu_image.img.bz2
+```
+
+## Virtual machine configuration
+
+Now create a qcow2 image snapshot using the command below:
+
+```shell
+cd /var/lib/libvirt/images/flatcar-linux
+qemu-img create -f qcow2 -F qcow2 -b flatcar_production_qemu_image.img flatcar-linux1.qcow2
+```
+
+This will create a `flatcar-linux1.qcow2` snapshot image. Any changes to `flatcar-linux1.qcow2` will not be reflected in `flatcar_production_qemu_image.img`. Making any changes to a base image (`flatcar_production_qemu_image.img` in our example) will corrupt its snapshots.
+
+### Ignition config
+
+The preferred way to configure a Flatcar Container Linux machine is via Ignition.
+
+#### Create the Ignition config
+
+Typically you won't write Ignition files yourself; rather, you will use a tool like the [config transpiler][config-transpiler] to generate them.
+
+However the Ignition file is created, it should be placed in a location which qemu can access. In this example, we'll place it in `/var/lib/libvirt/flatcar-linux/flatcar-linux1/provision.ign`.
+
+Here, for example, we create an empty Ignition config that contains no further declarations besides its specification version:
+
+```shell
+mkdir -p /var/lib/libvirt/flatcar-linux/flatcar-linux1/
+echo '{"ignition":{"version":"2.0.0"}}' > /var/lib/libvirt/flatcar-linux/flatcar-linux1/provision.ign
+```
+
+If the host uses SELinux, allow the VM access to the config:
+
+```shell
+semanage fcontext -a -t virt_content_t "/var/lib/libvirt/flatcar-linux/flatcar-linux1"
+restorecon -R "/var/lib/libvirt/flatcar-linux/flatcar-linux1"
+```
+
+If the host uses AppArmor, allow `qemu` to access the config files:
+
+```shell
+echo " # For ignition files" >> /etc/apparmor.d/abstractions/libvirt-qemu
+echo " /var/lib/libvirt/flatcar-linux/** r," >> /etc/apparmor.d/abstractions/libvirt-qemu
+```
+
+Since the empty Ignition config is not very useful, here is an example of how to write a simple Butane Config that adds your SSH keys and writes a hostname file:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/hostname
+ contents:
+ inline: "flatcar-linux1"
+
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
+```
+
+Assuming that you save this as `example.yaml` (and replace the dummy key with your public key), you can convert it to an Ignition config with the [config transpiler][config-transpiler].
+Here we run it from a Docker image:
+
+```shell
+cat example.yaml | docker run --rm -i quay.io/coreos/butane:release > /var/lib/libvirt/flatcar-linux/flatcar-linux1/provision.ign
+```
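Before booting, it's worth sanity-checking that the transpiler produced valid JSON. A minimal sketch (it writes a stand-in config to the current directory; point `ign` at your real `provision.ign` instead, and note that `python3` is only used here as a portable JSON validator):

```shell
# Write a stand-in Ignition config and verify it parses as JSON, printing
# the spec version it declares.
ign=./provision.ign
echo '{"ignition":{"version":"2.0.0"}}' > "$ign"
python3 -c '
import json, sys
cfg = json.load(open(sys.argv[1]))
print("ignition spec:", cfg["ignition"]["version"])
' "$ign"
```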
+
+#### Creating the domain
+
+Once the Ignition file exists on disk, the machine can be configured and started:
+
+```shell
+virt-install --connect qemu:///system \
+ --import \
+ --name flatcar-linux1 \
+ --ram 1024 --vcpus 1 \
+ --os-type=generic \
+ --disk path=/var/lib/libvirt/images/flatcar-linux/flatcar-linux1.qcow2,format=qcow2,bus=virtio \
+ --vnc --noautoconsole \
+ --qemu-commandline='-fw_cfg name=opt/org.flatcar-linux/config,file=/var/lib/libvirt/flatcar-linux/flatcar-linux1/provision.ign'
+```
+
+#### SSH into the machine
+
+By default, libvirt runs its own DHCP server which will provide an IP address to new instances. You can query it for what IP addresses have been assigned to machines:
+
+```shell
+$ virsh net-dhcp-leases default
+Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
+-------------------------------------------------------------------------------------------------------------------
+ 2017-08-09 16:32:52 52:54:00:13:12:45 ipv4 192.168.122.184/24 flatcar-linux1 ff:32:39:f9:b5:00:02:00:00:ab:11:06:6a:55:ed:5d:0a:73:ee
+```
+
+
+To SSH into the machine:
+
+```shell
+ssh core@192.168.122.184
+```
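If you want the address non-interactively, the lease table can be parsed with standard tools. A sketch run against a captured sample line, since real output requires libvirt (the column layout matches the `virsh net-dhcp-leases` example above):

```shell
# Extract the IPv4 address (5th whitespace-separated column) from a lease
# line and strip the CIDR suffix; pipe real `virsh net-dhcp-leases default`
# output through the same awk filter instead.
sample=' 2017-08-09 16:32:52  52:54:00:13:12:45  ipv4  192.168.122.184/24  flatcar-linux1'
ip=$(printf '%s\n' "$sample" | awk '$4 == "ipv4" { sub(/\/.*/, "", $5); print $5 }')
echo "$ip"   # prints 192.168.122.184
```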
+
+### Network configuration
+
+#### Static IP
+
+By default, Flatcar Container Linux uses DHCP to get its network configuration. In this example the VM is attached to the local network via a bridge on the host's `virbr0`. To configure a static address, add a [networkd unit][systemd-network] to the Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq.......
+
+storage:
+ files:
+ - path: /etc/hostname
+ contents:
+ inline: flatcar-linux1
+ - path: /etc/systemd/network/10-ens3.network
+ contents:
+ inline: |
+ [Match]
+ MACAddress=52:54:00:fe:b3:c0
+
+ [Network]
+ Address=192.168.122.2
+ Gateway=192.168.122.1
+ DNS=8.8.8.8
+```
+
+[systemd-network]: http://www.freedesktop.org/software/systemd/man/systemd.network.html
+
+#### Using DHCP with a libvirt network
+
+An alternative to statically configuring an IP at the host level is to do so at the libvirt level. If you're using libvirt's built-in DHCP server and a recent libvirt version, you can configure ahead of time which IP address will be handed out to a given machine.
+
+This can be done using the `net-update` command. The following assumes you're using the `default` libvirt network and have configured the MAC Address to `52:54:00:fe:b3:c0` through the `--network` flag on `virt-install`:
+
+```shell
+ip="192.168.122.2"
+mac="52:54:00:fe:b3:c0"
+
+virsh net-update --network "default" add-last ip-dhcp-host \
+  --xml "<host mac='$mac' ip='$ip'/>" \
+  --live --config
+```
+
+By executing these commands before running `virsh start`, we can ensure the libvirt DHCP server will hand out a known IP.
+
+### SSH Config
+
+To simplify this and avoid potential host key errors in the future, add the following to `~/.ssh/config`:
+
+```ini
+Host flatcar-linux1
+HostName 192.168.122.2
+User core
+StrictHostKeyChecking no
+UserKnownHostsFile /dev/null
+```
+
+Now you can log in to the virtual machine with:
+
+```shell
+ssh flatcar-linux1
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+## Terraform
+
+The [`libvirt` Terraform Provider](https://github.com/dmacvicar/terraform-provider-libvirt/) lets you deploy machines quickly and declaratively.
+This is especially useful for local development of a configuration that is also in use on a cloud provider.
+Read more about using Terraform and Flatcar [here](../../provisioning/terraform/).
+
+The following Terraform v0.13 module may serve as a base for your own setup.
+A new disk volume pool will be created in `/var/tmp` as a precaution against modifying the base image by accident.
+
+First, prepare the base image and make sure you don't boot it via the [`flatcar_production_qemu.sh`](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh) script or similar:
+
+```sh
+cd ~/Downloads
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+bunzip2 flatcar_production_qemu_image.img.bz2
+mv flatcar_production_qemu_image.img flatcar_production_qemu_image-libvirt-import.img
+# optional, increase the image by 5 GB:
+qemu-img resize flatcar_production_qemu_image-libvirt-import.img +5G
+```
+
+The image is only used once for the import and can be deleted afterwards, even when new VMs are added.
+
+Start with a `libvirt-machines.tf` file that contains the main declarations:
+
+```hcl
+terraform {
+ required_version = ">= 0.13"
+ required_providers {
+ libvirt = {
+ source = "dmacvicar/libvirt"
+ version = "0.6.3"
+ }
+ ct = {
+ source = "poseidon/ct"
+ version = "0.7.1"
+ }
+ template = {
+ source = "hashicorp/template"
+ version = "~> 2.2.0"
+ }
+ }
+}
+
+provider "libvirt" {
+ uri = "qemu:///system"
+}
+
+resource "libvirt_pool" "volumetmp" {
+ name = "${var.cluster_name}-pool"
+ type = "dir"
+ path = "/var/tmp/${var.cluster_name}-pool"
+}
+
+resource "libvirt_volume" "base" {
+ name = "flatcar-base"
+ source = var.base_image
+ pool = libvirt_pool.volumetmp.name
+ format = "qcow2"
+}
+
+resource "libvirt_volume" "vm-disk" {
+ for_each = toset(var.machines)
+ # workaround: depend on libvirt_ignition.ignition[each.key], otherwise the VM will use the old disk when the user-data changes
+ name = "${var.cluster_name}-${each.key}-${md5(libvirt_ignition.ignition[each.key].id)}.qcow2"
+ base_volume_id = libvirt_volume.base.id
+ pool = libvirt_pool.volumetmp.name
+ format = "qcow2"
+}
+
+resource "libvirt_ignition" "ignition" {
+ for_each = toset(var.machines)
+ name = "${var.cluster_name}-${each.key}-ignition"
+ pool = libvirt_pool.volumetmp.name
+ content = data.ct_config.vm-ignitions[each.key].rendered
+}
+
+resource "libvirt_domain" "machine" {
+ for_each = toset(var.machines)
+ name = "${var.cluster_name}-${each.key}"
+ vcpu = var.virtual_cpus
+ memory = var.virtual_memory
+
+ fw_cfg_name = "opt/org.flatcar-linux/config"
+ coreos_ignition = libvirt_ignition.ignition[each.key].id
+
+ disk {
+ volume_id = libvirt_volume.vm-disk[each.key].id
+ }
+
+ graphics {
+ listen_type = "address"
+ }
+
+ # dynamic IP assignment on the bridge, NAT for Internet access
+ network_interface {
+ network_name = "default"
+ wait_for_lease = true
+ }
+}
+
+data "ct_config" "vm-ignitions" {
+ for_each = toset(var.machines)
+ content = data.template_file.vm-configs[each.key].rendered
+}
+
+data "template_file" "vm-configs" {
+ for_each = toset(var.machines)
+ template = file("${path.module}/machine-${each.key}.yaml.tmpl")
+
+ vars = {
+ ssh_keys = jsonencode(var.ssh_keys)
+ name = each.key
+ }
+}
+```
+
+Create a `variables.tf` file that declares the variables used above:
+
+```hcl
+variable "machines" {
+ type = list(string)
+ description = "Machine names, corresponding to machine-NAME.yaml.tmpl files"
+}
+
+variable "cluster_name" {
+ type = string
+ description = "Cluster name used as prefix for the machine names"
+}
+
+variable "ssh_keys" {
+ type = list(string)
+ description = "SSH public keys for user 'core'"
+}
+
+variable "base_image" {
+ type = string
+ description = "Path to unpacked Flatcar Container Linux image flatcar_production_qemu_image.img (probably after a qemu-img resize IMG +5G)"
+}
+
+variable "virtual_memory" {
+ type = number
+ default = 2048
+ description = "Virtual RAM in MB"
+}
+
+variable "virtual_cpus" {
+ type = number
+ default = 1
+ description = "Number of virtual CPUs"
+}
+```
+
+An `outputs.tf` file shows the resulting IP addresses:
+
+```hcl
+output "ip-addresses" {
+ value = {
+ for key in var.machines :
+ "${var.cluster_name}-${key}" => libvirt_domain.machine[key].network_interface.0.addresses.*
+ }
+ # or instead of outputs, use dig CLUSTERNAME-VMNAME @192.168.122.1
+}
+```
+
+Now you can use the module by declaring the variables and a Container Linux Configuration for a machine.
+First create a `terraform.tfvars` file with your settings:
+
+```hcl
+base_image = "file:///home/myself/Downloads/flatcar_production_qemu_image-libvirt-import.img"
+cluster_name = "mycluster"
+machines = ["mynode"]
+virtual_memory = 768
+ssh_keys = ["ssh-rsa AA... me@mail.net"]
+```
+
+Create the configuration for `mynode` in the file `machine-mynode.yaml.tmpl`:
+
+```yaml
+---
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys: ${ssh_keys}
+storage:
+ files:
+ - path: /home/core/works
+ filesystem: root
+ mode: 0755
+ contents:
+ inline: |
+ #!/bin/bash
+ set -euo pipefail
+ hostname="$(hostname)"
+ echo My name is ${name} and the hostname is $${hostname}
+```
+
+Finally, run Terraform v0.13 as follows to create the machine:
+
+```shell
+terraform init
+terraform apply
+```
+
+View the VMs in `virt-manager` where you can see the VGA console.
+Log in via `ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@IPADDRESS` with the printed IP address.
+
+When you make a change to `machine-mynode.yaml.tmpl` and run `terraform apply` again, the instance and its disk will be replaced.
+
+[flatcar-dev]: https://groups.google.com/forum/#!forum/flatcar-linux-dev
+[irc]: irc://irc.freenode.org:6667/#flatcar
+[config-transpiler]: ../../provisioning/config-transpiler
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
diff --git a/content/docs/latest/installing/vms/qemu.md b/content/docs/latest/installing/vms/qemu.md
new file mode 100644
index 00000000..106a85e6
--- /dev/null
+++ b/content/docs/latest/installing/vms/qemu.md
@@ -0,0 +1,189 @@
+---
+title: Running Flatcar Container Linux on QEMU
+linktitle: Running on QEMU
+weight: 30
+aliases:
+ - ../../os/booting-with-qemu
+ - ../../cloud-providers/booting-with-qemu
+---
+
+These instructions will bring up a single Flatcar Container Linux instance under QEMU, the small Swiss Army knife of virtual machine and CPU emulators. If you need to do more, such as [configuring networks][qemunet] differently, refer to the [QEMU Wiki][qemuwiki] and [User Documentation][qemudoc].
+
+You can direct questions to the [IRC channel][irc] or [mailing list][flatcar-dev].
+
+[qemunet]: http://wiki.qemu.org/Documentation/Networking
+[qemuwiki]: http://wiki.qemu.org/Manual
+[qemudoc]: http://qemu.weilnetz.de/qemu-doc.html
+
+## Install QEMU
+
+QEMU works best on Linux, but it can also be run on Windows and OS X. It should be available on just about any distro.
+
+### Debian or Ubuntu
+
+Documentation for [Debian][qemudeb] has more details but to get started all you need is:
+
+```shell
+sudo apt-get install qemu-system-x86 qemu-utils
+```
+
+[qemudeb]: https://wiki.debian.org/QEMU
+
+### Fedora or RedHat
+
+The Fedora wiki has a [quick howto][qemufed] but the basic install is easy:
+
+```shell
+sudo yum install qemu-system-x86 qemu-img
+```
+
+[qemufed]: https://fedoraproject.org/wiki/How_to_use_qemu
+
+### Arch
+
+This is all you need to get started:
+
+```shell
+sudo pacman -S qemu
+```
+
+More details can be found on [Arch's QEMU wiki page](https://wiki.archlinux.org/index.php/Qemu).
+
+### Gentoo
+
+As to be expected, Gentoo can be a little more complicated but all the required kernel options and USE flags are covered in the [Gentoo Wiki][qemugen]. Usually this should be sufficient:
+
+```shell
+echo app-emulation/qemu qemu_softmmu_targets_x86_64 virtfs xattr >> /etc/portage/package.use
+emerge -av app-emulation/qemu
+```
+
+[qemugen]: http://wiki.gentoo.org/wiki/QEMU
+
+## Startup Flatcar Container Linux
+
+Once QEMU is installed you can download and start the latest Flatcar Container Linux image.
+
+### Choosing a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+
+
+
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+There are two files you need: the disk image (provided in qcow2 format) and the wrapper shell script to start QEMU.
+
+```shell
+mkdir flatcar; cd flatcar
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh.sig
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2.sig
+gpg --verify flatcar_production_qemu.sh.sig
+gpg --verify flatcar_production_qemu_image.img.bz2.sig
+bzip2 -d flatcar_production_qemu_image.img.bz2
+chmod +x flatcar_production_qemu.sh
+```
+
+
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+There are two files you need: the disk image (provided in qcow2 format) and the wrapper shell script to start QEMU.
+
+```shell
+mkdir flatcar; cd flatcar
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh.sig
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2.sig
+gpg --verify flatcar_production_qemu.sh.sig
+gpg --verify flatcar_production_qemu_image.img.bz2.sig
+bzip2 -d flatcar_production_qemu_image.img.bz2
+chmod +x flatcar_production_qemu.sh
+```
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+There are two files you need: the disk image (provided in qcow2 format) and the wrapper shell script to start QEMU.
+
+```shell
+mkdir flatcar; cd flatcar
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh.sig
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+wget https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2.sig
+gpg --verify flatcar_production_qemu.sh.sig
+gpg --verify flatcar_production_qemu_image.img.bz2.sig
+bzip2 -d flatcar_production_qemu_image.img.bz2
+chmod +x flatcar_production_qemu.sh
+```
+
+
+
+Starting is as simple as:
+
+```shell
+./flatcar_production_qemu.sh -nographic
+```
+
+### SSH keys
+
+In order to log in to the virtual machine you will need to use SSH keys. If you don't already have an SSH key pair, you can generate one by running `ssh-keygen`. The wrapper script automatically looks for public keys in ssh-agent, if available, and at the default locations `~/.ssh/id_dsa.pub` and `~/.ssh/id_rsa.pub`. If you need to provide an alternate location, use the `-a` option:
+
+```shell
+./flatcar_production_qemu.sh -a ~/.ssh/authorized_keys -- -nographic
+```
+
+Note: Options such as `-a` for the wrapper script must be specified before any options for QEMU. To make the separation between the two explicit, you can use `--`, but that isn't required. See `./flatcar_production_qemu.sh -h` for details.
+
+Once the virtual machine has started you can log in via SSH:
+
+```shell
+ssh -l core -p 2222 localhost
+```
+
+### SSH config
+
+To simplify this and avoid potential host key errors in the future add the following to `~/.ssh/config`:
+
+```ini
+Host flatcar
+HostName localhost
+Port 2222
+User core
+StrictHostKeyChecking no
+UserKnownHostsFile /dev/null
+```
+
+Now you can log in to the virtual machine with:
+
+```shell
+ssh flatcar
+```
+
+### Butane Configs
+
+Flatcar Container Linux allows you to configure machine parameters, configure networking, launch systemd units on startup, and more via Butane Configs. These configs are then transpiled into Ignition configs and given to booting machines. Head over to the [docs to learn about the supported features][butane-configs]. An Ignition config can be passed to the virtual machine using the QEMU Firmware Configuration Device. The wrapper script provides a method for doing so:
+
+```shell
+./flatcar_production_qemu.sh -i config.ign -- -nographic
+```
+
+This will pass the contents of `config.ign` through to Ignition, which runs in the virtual machine.
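If you don't have a config at hand, a minimal one is enough to test the pass-through mechanism. A sketch (the spec version shown is an assumption; use whatever your Butane/transpiler toolchain emits, and `python3` is only used as a portable JSON validator):

```shell
# Create a minimal Ignition config and confirm it is well-formed JSON
# before passing it to the wrapper script with `-i config.ign`.
echo '{"ignition":{"version":"3.3.0"}}' > config.ign
python3 -m json.tool config.ign
```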
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
+[flatcar-dev]: https://groups.google.com/forum/#!forum/flatcar-linux-dev
+[irc]: irc://irc.freenode.org:6667/#flatcar
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/installing/vms/vagrant.md b/content/docs/latest/installing/vms/vagrant.md
new file mode 100644
index 00000000..c676b983
--- /dev/null
+++ b/content/docs/latest/installing/vms/vagrant.md
@@ -0,0 +1,198 @@
+---
+title: Running Flatcar Container Linux on Vagrant
+linktitle: Running on Vagrant
+weight: 30
+aliases:
+ - ../../os/booting-on-vagrant
+ - ../../cloud-providers/booting-on-vagrant
+---
+
+_While we always welcome community contributions and fixes, please note that Vagrant is not an officially supported platform at this time. (See the [platform overview](/#installing-flatcar).)_
+
+Running Flatcar Container Linux with Vagrant is one way to bring up a single machine or virtualize an entire cluster on your laptop. Since the true power of Flatcar Container Linux can be seen with a cluster, we're going to concentrate on that. Instructions for a single machine can be found [towards the end](#single-machine) of the guide.
+
+You can direct questions to the [IRC channel][irc] or [mailing list][flatcar-dev].
+
+## Install Vagrant and VirtualBox
+
+Vagrant is a simple-to-use command-line virtual machine manager. There are install packages available for Windows, Linux and OS X. Find the latest installer on the [Vagrant downloads page][vagrant]. Be sure to get version 2.0.4 or greater so that Flatcar images are detected correctly.
+
+[vagrant]: http://www.vagrantup.com/downloads.html
+
+Vagrant can use either the free VirtualBox provider or the commercial VMware provider. Instructions for both are below. For the VirtualBox provider, version 4.3.10 or greater is required.
+
+## Install Flatcar Container Linux
+
+You can import the flatcar box and boot it with Vagrant.
+You'll find it in `https://${CHANNEL}.release.flatcar-linux.net/amd64-usr/${VERSION}/flatcar_production_vagrant.box`.
+Make sure you download the signature (it's available in `https://${CHANNEL}.release.flatcar-linux.net/amd64-usr/${VERSION}/flatcar_production_vagrant.box.sig`) and check it before proceeding.
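The `gpg --verify` step requires the Flatcar image signing key in your keyring. A sketch of importing it (the key URL is the published location at the time of writing; verify it against the Flatcar security page before trusting it):

```shell
# Import the Flatcar image signing key so the verification below can
# succeed; falls through with a message if gpg or the network is unavailable.
curl -fsSL https://www.flatcar.org/security/image-signing-key/Flatcar_Image_Signing_Key.asc \
  | gpg --import || echo "key import skipped (no network or gpg?)"
```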
+
+For example, to get the latest alpha:
+
+```shell
+$ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.box
+$ wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_vagrant.box.sig
+$ gpg --verify flatcar_production_vagrant.box.sig
+gpg: assuming signed data in 'flatcar_production_vagrant.box'
+gpg: Signature made Thu 15 Mar 2018 10:29:23 AM CET
+gpg: using RSA key A621F1DA96C93C639506832D603443A1D0FC498C
+gpg: Good signature from "Flatcar Buildbot (Official Builds) " [ultimate]
+$ vagrant box add flatcar-alpha flatcar_production_vagrant.box
+==> box: Box file was not detected as metadata. Adding it directly...
+==> box: Adding box 'flatcar-alpha' (v0) for provider:
+ box: Unpacking necessary files from: file:///tmp/flatcar_production_vagrant.box
+==> box: Successfully added box 'flatcar-alpha' (v0) for 'virtualbox'!
+$ vagrant init flatcar-alpha
+A `Vagrantfile` has been placed in this directory. You are now
+ready to `vagrant up` your first virtual environment! Please read
+the comments in the Vagrantfile as well as documentation on
+`vagrantup.com` for more information on using Vagrant.
+$ vagrant up
+Bringing machine 'default' up with 'virtualbox' provider...
+==> default: Importing base box 'flatcar-alpha'...
+==> default: Matching MAC address for NAT networking...
+==> default: Setting the name of the VM: vagrant_default_1520510346048_14823
+==> default: Clearing any previously set network interfaces...
+==> default: Preparing network interfaces based on configuration...
+ default: Adapter 1: nat
+==> default: Forwarding ports...
+ default: 22 (guest) => 2222 (host) (adapter 1)
+==> default: Running 'pre-boot' VM customizations...
+==> default: Booting VM...
+==> default: Waiting for machine to boot. This may take a few minutes...
+ default: SSH address: 127.0.0.1:2222
+ default: SSH username: core
+ default: SSH auth method: private key
+==> default: Machine booted and ready!
+$ vagrant ssh
+Last login: Thu Mar 15 17:02:25 UTC 2018 from 10.0.2.2 on ssh
+Flatcar Container Linux by Kinvolk alpha (1702.1.0)
+core@localhost ~ $
+```
+
+## Starting a cluster
+
+You can configure your Vagrant machines with a `Vagrantfile` like this example:
+
+```ruby
+ENV["TERM"] = "xterm-256color"
+ENV["LC_ALL"] = "en_US.UTF-8"
+
+Vagrant.require_version '>= 2.0.4'
+
+Vagrant.configure('2') do |config|
+ config.ssh.username = 'core'
+ config.ssh.insert_key = true
+ config.vm.box = 'flatcar-alpha'
+ config.vm.synced_folder '.', '/vagrant', disabled: true
+ config.vm.provider :virtualbox do |v|
+ v.check_guest_additions = false
+ v.functional_vboxsf = false
+ v.cpus = 2
+ v.memory = 2048
+ end
+ config.vm.define 'core-01' do |c|
+ end
+ config.vm.define 'core-02' do |c|
+ end
+ config.vm.define 'core-03' do |c|
+ end
+end
+```
+
+### Start machines using Vagrant's default VirtualBox provider
+
+Start the machine(s):
+
+```shell
+vagrant up
+```
+
+List the status of the running machines:
+
+```shell
+$ vagrant status
+Current machine states:
+
+core-01 running (virtualbox)
+core-02 running (virtualbox)
+core-03 running (virtualbox)
+
+This environment represents multiple VMs. The VMs are all listed
+above with their current state. For more information about a specific
+VM, run `vagrant status NAME`.
+```
+
+Connect to one of the machines:
+
+```shell
+vagrant ssh core-01 -- -A
+```
+
+### Start machines using Vagrant's VMware provider
+
+If you have purchased the [VMware Vagrant provider](http://www.vagrantup.com/vmware), run the following commands:
+
+```shell
+vagrant up --provider vmware_fusion
+vagrant ssh core-01 -- -A
+```
+
+## Single machine
+
+To start a single machine, we need to provide some config parameters in cloud-config format via the `user-data` file.
+
+Start the machine:
+
+```shell
+vagrant up
+```
+
+Connect to the machine:
+
+```shell
+vagrant ssh core-01 -- -A
+```
+
+### Start machine using Vagrant's VMware provider
+
+If you have purchased the [VMware Vagrant provider](http://www.vagrantup.com/vmware), run the following commands:
+
+```shell
+vagrant up --provider vmware_fusion
+vagrant ssh core-01 -- -A
+```
+
+## Shared folder setup
+
+Optionally, you can share a folder from your laptop into the virtual machine. This is useful for easily getting code and Dockerfiles into Flatcar Container Linux.
+
+```ruby
+config.vm.synced_folder ".", "/home/core/share", id: "core", :nfs => true, :mount_options => ['nolock,vers=3,udp']
+```
+
+After a `vagrant reload` you will be prompted for your local machine password.
+
+## New box versions
+
+Flatcar Container Linux is a rolling release distribution and versions that are out of date will automatically update. If you want to start from the most up to date version you will need to make sure that you have the latest box file of Flatcar Container Linux. You can do this using `vagrant box update` - or, simply remove the old box file and Vagrant will download the latest one the next time you `vagrant up`.
+
+```shell
+vagrant box remove flatcar-alpha vmware_fusion
+vagrant box remove flatcar-alpha virtualbox
+```
+
+If you'd like to download the box file separately, you can fetch the URL contained in the `Vagrantfile` and add the downloaded file manually:
+
+```shell
+vagrant box add flatcar-alpha flatcar_production_vagrant.box
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide, learn about [CoreOS Container Linux clustering with Vagrant](https://coreos.com/blog/coreos-clustering-with-vagrant/), or dig into [more specific topics][doc-index].
+
+[flatcar-dev]: https://groups.google.com/forum/#!forum/flatcar-linux-dev
+[irc]: irc://irc.freenode.org:6667/#flatcar
+[quickstart]: ../
+[doc-index]: ../../
diff --git a/content/docs/latest/installing/vms/virtualbox.md b/content/docs/latest/installing/vms/virtualbox.md
new file mode 100644
index 00000000..84984bd1
--- /dev/null
+++ b/content/docs/latest/installing/vms/virtualbox.md
@@ -0,0 +1,134 @@
+---
+title: Running Flatcar Container Linux on VirtualBox
+linktitle: Running on VirtualBox
+weight: 30
+aliases:
+ - ../../os/booting-on-virtualbox
+ - ../../cloud-providers/booting-on-virtualbox
+---
+
+_While we always welcome community contributions and fixes, please note that VirtualBox is not an officially supported platform at this time. (See the [platform overview](/#installing-flatcar).)_
+
+These instructions will walk you through running Flatcar Container Linux on Oracle VM VirtualBox.
+
+## Building the virtual disk
+
+There is a script that simplifies building the VDI image. It downloads a bare-metal image, verifies it with GPG, and converts that image to a VDI image.
+
+The script is located on [GitHub](https://github.com/flatcar/scripts/blob/main/contrib/create-coreos-vdi). The running host must support VirtualBox tools.
+
+As a first step, download the script and make it executable:
+
+```shell
+wget https://raw.githubusercontent.com/flatcar/scripts/main/contrib/create-coreos-vdi
+chmod +x create-coreos-vdi
+```
+
+To run the script, you can specify a destination location and the Flatcar Container Linux version.
+
+```shell
+./create-coreos-vdi -d /data/VirtualBox/Templates
+```
+
+## Choose a channel
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature][update-strategies], although we don't recommend it. Read the [release notes][release-notes] for specific features and bug fixes.
+
+
+
+
+
+
+The Alpha channel closely tracks master and is released frequently. The newest versions of system libraries and utilities will be available for testing. The current version is Flatcar Container Linux {{< param alpha_channel >}}.
+
+Create a disk image from this channel by running:
+
+```shell
+./create-coreos-vdi -V alpha
+```
+
+
+
+The Beta channel consists of promoted Alpha releases. The current version is Flatcar Container Linux {{< param beta_channel >}}.
+
+Create a disk image from this channel by running:
+
+```shell
+./create-coreos-vdi -V beta
+```
+
+
+
+The Stable channel should be used by production clusters. Versions of Flatcar Container Linux are battle-tested within the Beta and Alpha channels before being promoted. The current version is Flatcar Container Linux {{< param stable_channel >}}.
+
+Create a disk image from this channel by running:
+
+```shell
+./create-coreos-vdi -V stable
+```
+
+
+
+
+After the script has finished successfully, the Flatcar Container Linux image will be available at the specified destination location or at the current location. The file name will be something like:
+
+```shell
+coreos_production_stable.vdi
+```
+
+## Creating a config-drive
+
+Cloud-config can be specified by attaching a [config-drive](https://github.com/flatcar/coreos-cloudinit/blob/master/Documentation/config-drive.md) with the label `config-2`. This is commonly done through whatever interface allows for attaching CD-ROMs or new drives.
+
+Note that the config-drive standard was originally an OpenStack feature, which is why you'll see strings containing `openstack`. This filepath needs to be retained, although Flatcar Container Linux supports config-drive on all platforms.
+
+For more information on customization that can be done with cloud-config, head on over to the [cloud-config guide](https://github.com/flatcar/coreos-cloudinit/blob/master/Documentation/cloud-config.md).
+
+You need a config-drive to configure at least one SSH key for access to the virtual machine. If you are in a hurry, you can create a basic config-drive with the following steps:
+
+```shell
+wget https://raw.github.com/flatcar/scripts/main/contrib/create-basic-configdrive
+chmod +x create-basic-configdrive
+./create-basic-configdrive -H my_vm01 -S ~/.ssh/id_rsa.pub
+```
+
+An ISO file named `my_vm01.iso` will be created that will configure a virtual machine to accept your SSH key and set its name to my_vm01.
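The script is a convenience wrapper; the config-drive itself is just an ISO9660 volume labelled `config-2` with the cloud-config at `openstack/latest/user_data`. A manual sketch (the hostname is a placeholder, and `mkisofs` may be packaged as `genisoimage` on your distro):

```shell
# Build a config-drive by hand: lay out the OpenStack-style tree, then wrap
# it in an ISO with the volume label config-2.
drive=$(mktemp -d)
mkdir -p "$drive/openstack/latest"
cat > "$drive/openstack/latest/user_data" <<'EOF'
#cloud-config
hostname: my_vm01
EOF
if command -v mkisofs >/dev/null; then
  mkisofs -R -V config-2 -o my_vm01.iso "$drive"
else
  echo "mkisofs not installed; config-drive tree left at $drive"
fi
```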
+
+## Deploying a new virtual machine on VirtualBox
+
+Use the built image as the base image. Clone that image for each new virtual machine and set the desired size.
+
+```shell
+VBoxManage clonehd coreos_production_stable.vdi my_vm01.vdi
+# Resize virtual disk to 10 GB
+VBoxManage modifyhd my_vm01.vdi --resize 10240
+```
+
+At boot time, Flatcar Container Linux will detect that the volume size has changed and will resize the filesystem accordingly.
+
+Open VirtualBox Manager and go to Machine > New. Type the desired machine name and choose 'Linux' as the type and 'Linux 2.6 / 3.x (64 bit)' as the version.
+
+Next, choose the desired memory size; at least 2 GB for an optimal experience.
+
+Then, choose 'Use an existing virtual hard drive file' and find the new cloned image.
+
+Click on the 'Create' button to create the virtual machine.
+
+Next, open the settings of the newly created virtual machine, click on the Storage tab, and load the created config-drive into the CD/DVD drive.
+
+Click on the 'OK' button and the virtual machine will be ready to be started.
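The GUI steps above can also be scripted with `VBoxManage`. The sketch below is a hedged equivalent, assuming the `my_vm01.vdi` and `my_vm01.iso` files from the previous steps; the VM name, OS type, and controller name mirror the GUI choices and only run when VirtualBox is installed:

```shell
# CLI sketch of the VirtualBox Manager steps above (assumes my_vm01.vdi and
# my_vm01.iso exist in the current directory).
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage createvm --name my_vm01 --ostype Linux26_64 --register
  VBoxManage modifyvm my_vm01 --memory 2048
  VBoxManage storagectl my_vm01 --name SATA --add sata
  # Attach the cloned disk and the config-drive ISO
  VBoxManage storageattach my_vm01 --storagectl SATA --port 0 --device 0 \
    --type hdd --medium my_vm01.vdi
  VBoxManage storageattach my_vm01 --storagectl SATA --port 1 --device 0 \
    --type dvddrive --medium my_vm01.iso
  VBoxManage startvm my_vm01 --type headless
else
  echo "VBoxManage not found; use the VirtualBox Manager steps above"
fi
```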
+
+## Logging in
+
+Networking can take a bit of time to come up under VirtualBox, and the IP address is needed in order to connect. Press Enter a few times at the login prompt to see the current IP address. If you see the VirtualBox NAT IP 10.0.2.15, go to the virtual machine settings, click the Network tab, then Port Forwarding, and add the rule "Host Port: 2222; Guest Port: 22". You can then connect with `ssh core@localhost -p 2222`.
+
+Now, log in using your private SSH key and the IP address shown at the login prompt:
+
+```shell
+ssh core@192.168.56.101
+```
+
+## Using Flatcar Container Linux
+
+Now that you have a machine booted it is time to play around. Check out the [Flatcar Container Linux Quickstart][quickstart] guide or dig into [more specific topics][doc-index].
+
+[update-strategies]: ../../setup/releases/update-strategies
+[release-notes]: https://flatcar-linux.org/releases
+[quickstart]: ../
+[doc-index]: ../../
+
diff --git a/content/docs/latest/migrating-from-coreos/_index.md b/content/docs/latest/migrating-from-coreos/_index.md
new file mode 100644
index 00000000..694d0091
--- /dev/null
+++ b/content/docs/latest/migrating-from-coreos/_index.md
@@ -0,0 +1,49 @@
+---
+title: Migration from CoreOS Container Linux
+linktitle: Migrating from CoreOS
+weight: 110
+aliases:
+ - os/migrate-from-container-linux
+---
+
+While Flatcar is compatible with CoreOS Container Linux, there are some naming differences you need to be aware of.
+
+**NOTE:** See [Updating from CoreOS Container Linux](update-from-container-linux)
+for additional information on updating an existing cluster.
+
+## Installation
+
+_Optional:_ Instead of `coreos-installer` you should use `flatcar-installer`.
+
+## Kernel command line parameters
+
+_Optional:_ Instead of providing the `coreos.first_boot=1` argument via the boot loader you should provide `flatcar.first_boot=1`.
+This forces provisioning via Ignition even if the machine (image) was booted already before.
+
+_Optional:_ Instead of providing the `coreos.config.url=SOMEURL` argument via the boot loader you should provide `ignition.config.url=SOMEURL`
+to tell Ignition to download the configuration.
+The change to a more generic name was made upstream by the Ignition project. Version 0.33 still supports both names, and we
+also support the analogous `flatcar.config.url` option, but we encourage the generic name because future versions of Ignition
+will only support `ignition.config.url`.
+
+_Optional:_ Instead of providing the `coreos.oem.id=NAME` argument via the boot loader you should provide `flatcar.oem.id=NAME`.
+(A change to the more generic name `ignition.platform.id` was done upstream by the Afterburn project but is not part of Container Linux yet.)
+
+**Recover from or prevent errors with missing OEM settings (e.g., `coreos-metadata-sshkeys@core.service`):** While current releases handle both `coreos.oem` and `flatcar.oem` names, previous releases still required `flatcar.oem.…`.
+In that case you need to change the variables in the file `/usr/share/oem/grub.cfg` when you update from CoreOS Container Linux:
+
+```text
+# GRUB settings
+set oem_id="myoemvalue"
+set linux_append="$linux flatcar.oem.id=myoemvalue"
+```
+
+## Ignition configuration with QEMU
+
+_Optional:_ Instead of using `opt/com.coreos/config` in the `-fw_cfg` name-value argument pair for QEMU/KVM or libvirt you need to use `opt/org.flatcar-linux/config`.
+The value in the argument pair specifies the Ignition file to use.
+
+## Ignition configuration with VMware
+
+_Optional:_ Instead of `coreos.config.data` and `coreos.config.data.encoding` for the VMware `guestinfo.VARIABLE` command line options you should use `ignition.config.data` and `ignition.config.data.encoding`.
+Same as for the `ignition.config.url` kernel parameter this change was done upstream by the Ignition project.
diff --git a/content/docs/latest/migrating-from-coreos/update-from-container-linux.md b/content/docs/latest/migrating-from-coreos/update-from-container-linux.md
new file mode 100644
index 00000000..6a0a92ff
--- /dev/null
+++ b/content/docs/latest/migrating-from-coreos/update-from-container-linux.md
@@ -0,0 +1,112 @@
+---
+title: Updating from CoreOS Container Linux
+linktitle: Updating from CoreOS
+weight: 10
+aliases:
+ - ../os/update-from-container-linux
+---
+
+If you already have CoreOS Container Linux clusters and can't or don't want to freshly install Flatcar Container Linux, you can update to Flatcar Container Linux directly from CoreOS Container Linux by performing the following steps.
+
+**NOTE:** General differences when [migrating from CoreOS Container Linux][migrate-from-container-linux] also apply.
+
+
+## The migration script
+
+The [update-to-flatcar.sh](https://raw.githubusercontent.com/flatcar/flatcar-docs/main/update-to-flatcar.sh) script does all required steps for you:
+
+```shell
+# To be run on the node via SSH
+core@host ~ $ wget https://raw.githubusercontent.com/flatcar/flatcar-docs/main/update-to-flatcar.sh
+core@host ~ $ less update-to-flatcar.sh # Double check the content of the script
+core@host ~ $ chmod +x update-to-flatcar.sh
+core@host ~ $ ./update-to-flatcar.sh
+[…]
+Done, please reboot now
+core@host ~ $ sudo systemctl reboot
+```
+
+If it fails due to SSL connection issues from outdated certificates, you can also download the update payload of the latest Stable release through plain HTTP and use the `flatcar-update` script instead:
+
+```shell
+$ VER=$(curl -fsSL --insecure --ssl-no-revoke http://stable.release.flatcar-linux.net/amd64-usr/current/version.txt | grep FLATCAR_VERSION= | cut -d = -f 2)
+$ wget --no-check-certificate "http://update.release.flatcar-linux.net/amd64-usr/$VER/flatcar_production_update.gz"
+$ wget --no-check-certificate http://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-update
+$ less flatcar-update # Double check the content of the script
+$ chmod +x flatcar-update
+$ sudo ./flatcar-update --to-version "$VER" --to-payload flatcar_production_update.gz --force-flatcar-key
+```
+
+**Before you reboot, check that you migrated the variable names as written in [Migrating from CoreOS Container Linux](migrate-from-container-linux).**
+
+## Going back to CoreOS Container Linux
+
+You can also go the other way.
+
+### Manual rollback
+
+If you just updated to Flatcar (and haven't done any additional updates), CoreOS Container Linux will still be on your disk; you just need to roll back to the other partition.
+
+To do that, run the following command:
+
+```shell
+sudo cgpt prioritize "$(sudo cgpt find -t flatcar-usr | grep --invert-match "$(rootdev -s /usr)")"
+```
+
+Now you can reboot and you'll be back to CoreOS Container Linux.
+Remember to undo your changes in your `/etc/coreos/update.conf` after rolling back if you want to keep getting CoreOS Container Linux updates.
+
+For more information about manual rollbacks, check [Performing a manual rollback][manual-rollback].
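Before switching, you can inspect which `/usr` slot is currently booted and which partitions exist. This is an illustrative sketch that only runs on a Flatcar/CoreOS machine where the `rootdev` and `cgpt` tools are available:

```shell
# Sketch: inspect the current /usr partition and both USR slots before
# switching priorities (requires the rootdev and cgpt tools).
if command -v cgpt >/dev/null 2>&1 && command -v rootdev >/dev/null 2>&1; then
  rootdev -s /usr                # the currently booted /usr partition
  sudo cgpt find -t flatcar-usr  # both USR-A and USR-B partitions
else
  echo "cgpt/rootdev not available; run this on the Flatcar machine"
fi
```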
+
+### Force an update to CoreOS Container Linux
+
+This procedure is similar to updating from CoreOS Container Linux to Flatcar Container Linux.
+You need to get CoreOS Container Linux's public key, point update_engine to CoreOS Container Linux's update server, and force an update.
+
+Get CoreOS Container Linux's public key:
+
+```shell
+curl -L -o /tmp/key https://raw.githubusercontent.com/coreos/coreos-overlay/master/coreos-base/coreos-au-key/files/official-v2.pub.pem
+```
+
+Bind-mount it:
+
+```shell
+sudo mount --bind /tmp/key /usr/share/update_engine/update-payload-key.pub.pem
+```
+
+Create an `/etc/flatcar` directory and copy the current update configuration:
+
+```shell
+sudo mkdir -p /etc/flatcar
+sudo cp /etc/coreos/update.conf /etc/flatcar/
+```
+
+Change the `SERVER` field in `/etc/flatcar/update.conf`:
+
+```text
+SERVER=https://public.update.core-os.net/v1/update/
+```
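The edit can also be scripted with `sed`. The sketch below tries the substitution on a scratch copy first (the fallback contents are an assumption about a typical `update.conf`); to apply it for real, run the same `sed` expression with `sudo sed -i` against `/etc/flatcar/update.conf`:

```shell
# Try the SERVER substitution on a scratch copy; the fallback contents below
# are an assumed example of a typical update.conf.
cp /etc/flatcar/update.conf /tmp/update.conf 2>/dev/null || \
  printf 'GROUP=stable\nSERVER=https://public.update.flatcar-linux.net/v1/update/\n' > /tmp/update.conf
sed -i 's|^SERVER=.*|SERVER=https://public.update.core-os.net/v1/update/|' /tmp/update.conf
grep '^SERVER=' /tmp/update.conf
# → SERVER=https://public.update.core-os.net/v1/update/
```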
+
+Bind-mount the release file:
+
+```shell
+cp /usr/share/flatcar/release /tmp
+sudo mount --bind /tmp/release /usr/share/flatcar/release
+```
+
+Edit `FLATCAR_RELEASE_VERSION` to force an update:
+
+```text
+FLATCAR_RELEASE_VERSION=0.0.0
+```
+
+After that, trigger an update so that update_engine re-reads the edited configuration and initiates the download.
+The system will then reboot into CoreOS Container Linux:
+
+```shell
+sudo update_engine_client -update
+```
+
+[migrate-from-container-linux]: _index.md
+[manual-rollback]: ../setup/debug/manual-rollbacks/#performing-a-manual-rollback
diff --git a/content/docs/latest/provisioning/_index.md b/content/docs/latest/provisioning/_index.md
new file mode 100644
index 00000000..28e57afd
--- /dev/null
+++ b/content/docs/latest/provisioning/_index.md
@@ -0,0 +1,9 @@
+---
+title: Provisioning Tools
+description: >
+ Several different tools can be used to automate the provisioning of
+ Flatcar Container Linux images. These guides can help you understand what
+ each of the tools do, as well as provide plenty of examples of how to use
+ them.
+weight: 30
+---
diff --git a/content/docs/latest/provisioning/cl-config/_index.md b/content/docs/latest/provisioning/cl-config/_index.md
new file mode 100644
index 00000000..01db0bc0
--- /dev/null
+++ b/content/docs/latest/provisioning/cl-config/_index.md
@@ -0,0 +1,153 @@
+---
+title: Container Linux Config Transpiler
+linktitle: Container Linux Config Transpiler
+description: YAML configuration format used to generate Ignition configs.
+weight: 20
+aliases:
+ - ../os/provisioning
+ - ../reference/migrating-to-clcs/provisioning
+---
+
+Flatcar Container Linux automates machine provisioning with a specialized system for applying initial configuration. This system implements a process of (trans)compilation and validation for machine configs, and an atomic service to apply validated configurations to machines.
+
+## Container Linux Config
+
+Flatcar Container Linux admins define these configurations in a format called the [Container Linux Config][clc], which was originally designed for CoreOS Container Linux, but works perfectly well with Flatcar Container Linux. Container Linux Configs are structured as YAML, and intended to be human-readable. The Container Linux Config has features devoted to configuring Flatcar Container Linux services such as [etcd][etcd], [rkt][rkt], Docker, [flannel][flannel], and [locksmith][locksmith]. **The defining feature of the config is that it cannot be sent directly to a Flatcar Container Linux provisioning target**. Instead, it is first validated and transformed into a machine-readable and wire-efficient form.
+
+The following examples demonstrate the simplicity of the Container Linux Config format.
+
+This extremely simple Container Linux Config will fetch and run the current release of etcd:
+
+```yaml
+etcd: {}
+```
+
+Extend the definition to specify the version of etcd to run. The following example will provision a new Flatcar Container Linux machine to fetch and run the etcd service, version 3.1.6:
+
+```yaml
+etcd:
+ version: 3.1.6
+```
+
+Use variable replacement to configure the etcd service with the provisioning target's public and private IPv4 addresses, making it repeatable across a group of machines.
+
+```yaml
+etcd:
+ advertise_client_urls: http://{PUBLIC_IPV4}:2379
+ initial_advertise_peer_urls: http://{PRIVATE_IPV4}:2380
+ listen_client_urls: http://0.0.0.0:2379
+ listen_peer_urls: http://{PRIVATE_IPV4}:2380
+ discovery: https://discovery.etcd.io/
+```
+
+`PUBLIC_IPV4` and `PRIVATE_IPV4` are automatically populated from the environment in which Flatcar Container Linux runs, if this metadata exists. Given the many different environments in which Flatcar Container Linux can run, it's difficult if not impossible to accurately determine these variables in every instance. If a service fails to come up, checking that these values were actually populated is a good first troubleshooting step.
+
+For example, in an EC2 environment the instance metadata values `public_ipv4` and `local_ipv4` would be used. On Azure, *either* the virtual IP or public IP could be used for the `PUBLIC_IPV4` (`ct` makes a best guess and uses the virtual IP, but this could change in the future), and the dynamic IP would be used for the `PRIVATE_IPV4`. On bare metal, this information cannot be reliably derived in a general manner, so these variables cannot be used.
+
+Because variable expansion is unpredictable and complex, and because it is also common for users to inadvertently write invalid configs, the use of a transformation tool is strongly encouraged. The default tool recommended for this task is the [Config Transpiler][ct] (ct for short). The Config Transpiler will validate and transform a Container Linux Config into the format that Flatcar Container Linux can consume: the Ignition Config.
+
+## Ignition Config
+
+Ignition, the utility in Flatcar Container Linux responsible for provisioning the machine, fetches and executes the Ignition Config. Flatcar Container Linux directly consumes the Ignition Config configuration format.
+
+Ignition Configs are mostly static, distro-agnostic, and meant to be generated by a machine rather than a human. While they can be written directly by users, this is highly discouraged due to the ease with which errors may be introduced. Rather than writing Ignition Configs directly, users are encouraged to use provisioning tools like [Matchbox][matchbox], which transparently translate Container Linux Configs to Ignition Configs, or to use the Config Transpiler itself.
+
+![visual overview of the alternate ct workflows](../../img/ct-workflow.svg)
+
+As shown in this diagram, `ct` is manually invoked only when users are manually provisioning machines. If a provisioning tool like Matchbox is used, `ct` is transparently incorporated into the deployment pipeline. In that case, the user only needs to prepare a Container Linux Config; Ignition and the Ignition Config are merely an implementation detail.
+
+## Config Transpiler
+
+The Container Linux Config Transpiler abstracts the details of configuring Flatcar Container Linux. It's responsible for transforming a Container Linux Config written by a user into an Ignition Config to be consumed by instances of Flatcar Container Linux.
+
+The Container Linux Config Transpiler command line interface, `ct` for short, can be downloaded from its [GitHub Releases page][download-ct] or used via Docker (`cat example.yaml | docker run --rm -i ghcr.io/flatcar/ct:latest --platform=YOURPLATFORM`).
+
+The following config will configure an etcd cluster using the machine's public and private IP addresses:
+
+```yaml
+etcd:
+ advertise_client_urls: http://{PUBLIC_IPV4}:2379
+ initial_advertise_peer_urls: http://{PRIVATE_IPV4}:2380
+ listen_client_urls: http://0.0.0.0:2379
+ listen_peer_urls: http://{PRIVATE_IPV4}:2380
+ discovery: https://discovery.etcd.io/
+```
+
+As suggested earlier, `ct` requires information about the target environment before it can transform configs which use templating. If this config is passed to `ct` without any other arguments, `ct` fails with the following error message:
+
+```shell
+$ ct < example.yml
+error: platform must be specified to use templating
+```
+
+This message states that because the config takes advantage of templating (in this case, `PUBLIC_IPV4`), `ct` must be invoked with the `--platform` argument. This extra information is used by `ct` to make the platform-specific customizations necessary. Keeping the Container Linux Config and the invocation arguments separate allows the Container Linux Config to remain largely platform independent.
+
+Invoke `ct` again, this time passing Amazon EC2 as the platform:
+
+```shell
+$ ct --platform=ec2 < example.yml
+{"ignition":{"version":"2.0.0","config"...
+```
+
+This time, `ct` successfully runs and produces the following Ignition Config:
+
+```json
+{
+ "ignition": { "version": "2.0.0" },
+ "systemd": {
+ "units": [{
+ "name": "etcd-member.service",
+ "enable": true,
+ "dropins": [{
+ "name": "20-clct-etcd-member.conf",
+ "contents": "[Unit]\nRequires=coreos-metadata.service\nAfter=coreos-metadata.service\n\n[Service]\nEnvironmentFile=/run/metadata/coreos\nExecStart=\nExecStart=/usr/lib/flatcar/etcd-wrapper $ETCD_OPTS \\\n --listen-peer-urls=\"http://${COREOS_EC2_IPV4_LOCAL}:2380\" \\\n --listen-client-urls=\"http://0.0.0.0:2379\" \\\n --initial-advertise-peer-urls=\"http://${COREOS_EC2_IPV4_LOCAL}:2380\" \\\n --advertise-client-urls=\"http://${COREOS_EC2_IPV4_PUBLIC}:2379\" \\\n --discovery=\"https://discovery.etcd.io/\u003ctoken\u003e\""
+ }]
+ }]
+ }
+}
+```
+
+This Ignition Config enables and configures etcd as specified in the above Container Linux Config. This can be more easily seen if the contents of the etcd drop-in are formatted nicely:
+
+```ini
+[Unit]
+Requires=coreos-metadata.service
+After=coreos-metadata.service
+
+[Service]
+EnvironmentFile=/run/metadata/coreos
+ExecStart=
+ExecStart=/usr/lib/flatcar/etcd-wrapper $ETCD_OPTS \
+ --listen-peer-urls="http://${COREOS_EC2_IPV4_LOCAL}:2380" \
+ --listen-client-urls="http://0.0.0.0:2379" \
+ --initial-advertise-peer-urls="http://${COREOS_EC2_IPV4_LOCAL}:2380" \
+ --advertise-client-urls="http://${COREOS_EC2_IPV4_PUBLIC}:2379" \
+ --discovery="https://discovery.etcd.io/<token>"
+```
+
+The details of these changes are covered in depth in Ignition's [metadata documentation][metadata], but the gist is that `coreos-metadata` is used to fetch the IP addresses from the Amazon APIs and then `systemd` is leveraged to substitute the IP addresses into the invocation of etcd. The result is that even though Ignition only runs once, `coreos-metadata` fetches the IP addresses whenever etcd is run, allowing etcd to use IP addresses that have the potential to change.
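The expansion step can be imitated in a plain shell to see what systemd does at service start. This is an illustrative sketch: the directory and IP values below are made-up stand-ins for `/run/metadata/coreos` and the real metadata written by `coreos-metadata`:

```shell
# Simulate what EnvironmentFile= plus ${VAR} expansion does when the unit
# starts. /tmp/metadata-demo stands in for /run/metadata/coreos.
mkdir -p /tmp/metadata-demo
cat > /tmp/metadata-demo/coreos <<'EOF'
COREOS_EC2_IPV4_LOCAL=10.0.0.5
COREOS_EC2_IPV4_PUBLIC=203.0.113.7
EOF
# Source the environment file, then expand the variables into the flag,
# just as systemd does for the ExecStart= line.
set -a; . /tmp/metadata-demo/coreos; set +a
echo "--advertise-client-urls=http://${COREOS_EC2_IPV4_PUBLIC}:2379"
# → --advertise-client-urls=http://203.0.113.7:2379
```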
+
+## Migrating from cloud configs
+
+Previously, the recommended way to provision a Flatcar Container Linux machine was with a cloud-config. These configs would be given to a Flatcar Container Linux machine and a utility called [coreos-cloudinit][cloudinit] would read this file and apply the configuration on every boot.
+
+For a [number of reasons][vs], coreos-cloudinit has been deprecated in favor of Container Linux Configs and Ignition. For help migrating from these legacy cloud-configs to Container Linux Configs, refer to the [migration guide][migrating].
+
+## Using Container Linux Configs
+
+Now that the basics of Container Linux Configs have been covered, a good next step is to read through the [examples][examples] and start experimenting. The [troubleshooting guide][troubleshooting] is a good reference for debugging issues.
+
+[clc]: ../config-transpiler/configuration
+[cloudinit]: https://github.com/kinvolk/coreos-cloudinit
+[ct]: ../config-transpiler/
+[download-ct]: https://github.com/flatcar/container-linux-config-transpiler/releases
+[etcd]: https://github.com/etcd-io/etcd
+[examples]: examples
+[flannel]: https://github.com/coreos/flannel
+[locksmith]: https://github.com/kinvolk/locksmith
+[matchbox]: https://github.com/coreos/matchbox
+[metadata]: ../ignition/metadata
+[migrating]: from-cloud-config
+[rkt]: https://github.com/rkt/rkt
+[troubleshooting]: https://github.com/kinvolk/ignition/blob/master/doc/getting-started.md#troubleshooting
+[vs]: ../ignition/#ignition-vs-coreos-cloudinit
diff --git a/content/docs/latest/provisioning/cl-config/dynamic-data.md b/content/docs/latest/provisioning/cl-config/dynamic-data.md
new file mode 100644
index 00000000..fe7774a3
--- /dev/null
+++ b/content/docs/latest/provisioning/cl-config/dynamic-data.md
@@ -0,0 +1,123 @@
+---
+title: Referencing dynamic data
+weight: 40
+aliases:
+ - ../../container-linux-config-transpiler/doc/dynamic-data
+ - ../../container-linux-config-transpiler/dynamic-data
+---
+
+## Overview
+
+Sometimes it can be useful to refer to data in a Container Linux Config that isn't known until a machine boots, like its network address. This can be accomplished with [afterburn][afterburn] (previously called `coreos-metadata`). Afterburn is a very basic utility that fetches information about the current machine and makes it available for consumption. By making it a dependency of services that require this information, systemd ensures that coreos-metadata has completed successfully before starting those services. The services can then simply source the fetched information and let systemd perform the environment variable expansions.
+
+While the `coreos-metadata.service` runs afterburn, it will not set the hostname. The hostname is set either through an OEM agent or for particular platforms through afterburn in the initramfs. If afterburn supports your platform and is not invoked in the initramfs by default, you can run it later to set the hostname (`--hostname=/etc/hostname`).
+
+As of version 0.2.0, ct has support for making this easy for users. In specific sections of a config, users can reference dynamic data between `{}`, and ct will handle enabling the coreos-metadata service and using the information it provides.
+
+The available information varies by provider, and is expressed in different variables by coreos-metadata. If this feature is used, the `--platform` flag must be passed to ct. Currently, the `etcd` and `flannel` sections are the only ones which support this feature.
+
+[afterburn]: https://github.com/coreos/afterburn/
+
+## Supported data by provider
+
+This is the information available in each provider.
+
+| | `HOSTNAME` | `PRIVATE_IPV4` | `PUBLIC_IPV4` | `PRIVATE_IPV6` | `PUBLIC_IPV6` |
+|--------------------|------------|----------------|---------------|----------------|---------------|
+| Azure | | ✓ | ✓ | | |
+| Digital Ocean | ✓ | ✓ | ✓ | ✓ | ✓ |
+| EC2 | ✓ | ✓ | ✓ | | |
+| GCE | ✓ | ✓ | ✓ | | |
+| Packet | ✓ | ✓ | ✓ | | ✓ |
+| OpenStack-Metadata | ✓ | ✓ | ✓ | | |
+| Vagrant-Virtualbox | ✓ | ✓ | | | |
+
+## Custom metadata providers
+
+`ct` also supports custom metadata providers. To use the `custom` platform, create a coreos-metadata service unit to execute your own custom metadata fetcher. The custom metadata fetcher must write an environment file `/run/metadata/coreos` defining a `COREOS_CUSTOM_*` environment variable for every piece of dynamic data used in the specified Container Linux Config. The environment variables are the same as in the Container Linux Config, but prefixed with `COREOS_CUSTOM_`.
+
+### Example
+
+Assume `https://example.com/metadata-script.sh` is a script which communicates with a metadata service and then writes the following file to `/run/metadata/coreos`:
+```text
+COREOS_CUSTOM_HOSTNAME=foobar
+COREOS_CUSTOM_PRIVATE_IPV4=
+COREOS_CUSTOM_PUBLIC_IPV4=
+```
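Such a fetcher could look like the following sketch. Everything here is illustrative: a real script would query your own metadata service, the locally derived values are placeholders, and `OUT` defaults to a scratch path (the real unit would write `/run/metadata/coreos`):

```shell
#!/bin/sh
# Illustrative sketch of a custom metadata fetcher. A real script would query
# your metadata service; the values here are placeholders derived locally.
# OUT defaults to a scratch path for testing; the real unit would write
# /run/metadata/coreos.
set -e
OUT="${OUT:-/tmp/metadata/coreos}"
mkdir -p "$(dirname "$OUT")"
{
  echo "COREOS_CUSTOM_HOSTNAME=$(hostname)"
  echo "COREOS_CUSTOM_PRIVATE_IPV4=${PRIVATE_IPV4:-}"
  echo "COREOS_CUSTOM_PUBLIC_IPV4=${PUBLIC_IPV4:-}"
} > "$OUT"
cat "$OUT"
```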
+
+The following Container Linux Config downloads the metadata fetching script, replaces the ExecStart line in `coreos-metadata` service to use the script instead, and configures etcd using the metadata provided. Use the `--platform=custom` flag when transpiling.
+```yaml
+storage:
+ files:
+ - filesystem: "root"
+ path: "/opt/get-metadata.sh"
+ mode: 0755
+ contents:
+ remote:
+ url: "https://example.com/metadata-script.sh"
+
+systemd:
+ units:
+ - name: "coreos-metadata.service"
+ contents: |
+ [Unit]
+ Description=Metadata agent
+ After=nss-lookup.target
+ After=network-online.target
+ Wants=network-online.target
+ [Service]
+ Type=oneshot
+ Restart=on-failure
+ RemainAfterExit=yes
+ ExecStart=/opt/get-metadata.sh
+
+etcd:
+ version: "3.0.15"
+ name: "{HOSTNAME}"
+ advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
+ initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ listen_client_urls: "http://0.0.0.0:2379"
+ listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ initial_cluster: "{HOSTNAME}=http://{PRIVATE_IPV4}:2380"
+```
+
+You can find another example in the [VMware docs](../../installing/cloud/vmware.md).
+
+## Behind the scenes
+
+For a more in-depth walk through of how this feature works, let's look at the etcd example from the [examples document][examples].
+
+```yaml
+etcd:
+ version: "3.0.15"
+ name: "{HOSTNAME}"
+ advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
+ initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ listen_client_urls: "http://0.0.0.0:2379"
+ listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ initial_cluster: "{HOSTNAME}=http://{PRIVATE_IPV4}:2380"
+```
+
+If we give this example to ct with the `--platform=ec2` flag, it produces the following drop-in:
+
+```ini
+[Unit]
+Requires=coreos-metadata.service
+After=coreos-metadata.service
+
+[Service]
+EnvironmentFile=/run/metadata/coreos
+Environment="ETCD_IMAGE_TAG=v3.0.15"
+ExecStart=
+ExecStart=/usr/lib/flatcar/etcd-wrapper $ETCD_OPTS \
+ --name="${COREOS_EC2_HOSTNAME}" \
+ --listen-peer-urls="http://${COREOS_EC2_IPV4_LOCAL}:2380" \
+ --listen-client-urls="http://0.0.0.0:2379" \
+ --initial-advertise-peer-urls="http://${COREOS_EC2_IPV4_LOCAL}:2380" \
+ --initial-cluster="${COREOS_EC2_HOSTNAME}=http://${COREOS_EC2_IPV4_LOCAL}:2380" \
+ --advertise-client-urls="http://${COREOS_EC2_IPV4_LOCAL}:2379"
+```
+
+This drop-in specifies that etcd should run after the coreos-metadata service, and it uses `/run/metadata/coreos` as an `EnvironmentFile`. This enables the coreos-metadata service, and puts the information it discovers into environment variables. These environment variables are then expanded by systemd when the service starts, inserting the dynamic data into the command-line flags to etcd.
+
+[examples]: examples
diff --git a/content/docs/latest/provisioning/cl-config/examples.md b/content/docs/latest/provisioning/cl-config/examples.md
new file mode 100644
index 00000000..fbd75e66
--- /dev/null
+++ b/content/docs/latest/provisioning/cl-config/examples.md
@@ -0,0 +1,214 @@
+---
+title: Container Linux Config Examples
+linktitle: Examples
+weight: 20
+aliases:
+ - ../../container-linux-config-transpiler/doc/examples
+ - ../../container-linux-config-transpiler/examples
+---
+
+Here you can find a number of simple examples of Container Linux Configs, with explanations of what they do. The examples here are in no way comprehensive; for a full list of all the available fields, check out the [config-transpiler specification][spec].
+
+## Users and groups
+
+```yaml
+passwd:
+ users:
+ - name: core
+ password_hash: "$6$43y3tkl..."
+ ssh_authorized_keys:
+ - key1
+```
+
+This example modifies the existing `core` user, giving it a known password hash (which enables password login), and setting its SSH key.
+
+```yaml
+passwd:
+ users:
+ - name: user1
+ password_hash: "$6$43y3tkl..."
+ ssh_authorized_keys:
+ - key1
+ - key2
+ - name: user2
+ ssh_authorized_keys:
+ - key3
+```
+
+This example will create two users, `user1` and `user2`. The first user has a password set and two SSH public keys authorized to log in as the user. The second user doesn't have a password set (so password login is disabled), but has one SSH key.
+
+```yaml
+passwd:
+ users:
+ - name: user1
+ password_hash: "$6$43y3tkl..."
+ ssh_authorized_keys:
+ - key1
+ home_dir: /home/user1
+ no_create_home: true
+ groups:
+ - wheel
+ - plugdev
+ shell: /bin/bash
+```
+
+This example creates one user, `user1`, with the password hash `$6$43y3tkl...`, and sets up one SSH public key for the user. The user is also assigned the home directory `/home/user1` (which is not created, because of `no_create_home`), is added to the `wheel` and `plugdev` groups, and is given `/bin/bash` as its shell.
+
+### Generating a password hash
+
+If you choose to use a password instead of an SSH key, generating a safe hash is extremely important to the security of your system. Simplified hashes like md5crypt are trivial to crack on modern GPU hardware. Here are a few ways to generate secure hashes:
+
+```shell
+# On Debian/Ubuntu (via the package "whois")
+mkpasswd --method=SHA-512 --rounds=4096
+
+# OpenSSL (note: "-1" only makes md5crypt, which is better than plaintext but
+# should not be considered fully secure; OpenSSL 1.1.1+ supports "-6" for SHA-512)
+openssl passwd -1
+
+# Python (note: the crypt module used here was deprecated in Python 3.11 and removed in 3.13)
+python -c "import crypt,random,string; print(crypt.crypt(input('clear-text password: '), '\$6\$' + ''.join([random.choice(string.ascii_letters + string.digits) for _ in range(16)])))"
+
+# Perl (change password and salt values)
+perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'
+```
+
+Using a higher number of rounds will help create more secure passwords, but given enough time, password hashes can still be cracked by brute force. On most RPM-based distributions there is a tool called mkpasswd available in the `expect` package, but this is a different tool that handles neither "rounds" nor modern hashing algorithms.
+
+## Storage and files
+
+### Files
+
+```yaml
+storage:
+ files:
+ - path: /opt/file
+ filesystem: root
+ contents:
+ inline: Hello, world!
+ mode: 0644
+ user:
+ id: 500
+ group:
+ id: 501
+```
+
+This example creates a file at `/opt/file` with the contents `Hello, world!`, permissions 0644 (so readable and writable by the owner, and only readable by everyone else), and the file is owned by user uid 500 and gid 501.
+
+```yaml
+storage:
+ files:
+ - path: /opt/file2
+ filesystem: root
+ contents:
+ remote:
+ url: http://example.com/file2
+ compression: gzip
+ verification:
+ hash:
+ function: sha512
+ sum: 4ee6a9d20cc0e6c7ee187daffa6822bdef7f4cebe109eff44b235f97e45dc3d7a5bb932efc841192e46618f48a6f4f5bc0d15fd74b1038abf46bf4b4fd409f2e
+ mode: 0644
+```
+
+This example fetches a gzip-compressed file from `http://example.com/file2`, makes sure that it matches the provided sha512 hash, and writes it to `/opt/file2`.
+
+### Filesystems
+
+```yaml
+storage:
+ filesystems:
+ - name: filesystem1
+ mount:
+ device: /dev/disk/by-partlabel/ROOT
+ format: btrfs
+ wipe_filesystem: true
+ label: ROOT
+```
+
+This example formats the root filesystem to be `btrfs`, and names it `filesystem1` (primarily for use in the `files` section).
+
+## systemd units
+
+```yaml
+systemd:
+ units:
+ - name: etcd-member.service
+ dropins:
+ - name: conf1.conf
+ contents: |
+ [Service]
+ Environment="ETCD_NAME=infra0"
+```
+
+This example adds a drop-in for the `etcd-member` unit, setting the name for etcd to `infra0` with an environment variable. More information on systemd dropins can be found in [the docs][dropins].
+
+```yaml
+systemd:
+ units:
+ - name: hello.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=A hello world unit!
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/echo "Hello, World!"
+
+ [Install]
+ WantedBy=multi-user.target
+```
+
+This example creates a new systemd unit called hello.service, enables it so it will run on boot, and defines the contents to simply echo `"Hello, World!"`.
+
+## networkd units
+
+```yaml
+networkd:
+ units:
+ - name: static.network
+ contents: |
+ [Match]
+ Name=enp2s0
+
+ [Network]
+ Address=192.168.0.15/24
+ Gateway=192.168.0.1
+```
+
+This example creates a networkd unit to set the IP address on the `enp2s0` interface to the static address `192.168.0.15/24`, and sets an appropriate gateway. More information on networkd units in Flatcar Container Linux can be found in [the docs][networkd].
+
+## etcd
+
+```yaml
+etcd:
+ version: "3.0.15"
+ name: "{HOSTNAME}"
+ advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
+ initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ listen_client_urls: "http://0.0.0.0:2379"
+ listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ initial_cluster: "{HOSTNAME}=http://{PRIVATE_IPV4}:2380"
+```
+
+This example will create a drop-in for the `etcd-member` systemd unit, configuring it to use the specified version and adding all the specified options. It will also enable the `etcd-member` unit.
+
+This example references dynamic data that isn't known until an instance is booted. For more information on how this works, take a look at the [referencing dynamic data][dynamic-data] document.
+
+## Updates and Locksmithd
+
+```yaml
+update:
+ group: "beta"
+locksmith:
+ reboot_strategy: "etcd-lock"
+ window_start: "Sun 1:00"
+ window_length: "2h"
+```
+
+This example configures the Container Linux instance to be a member of the beta update group, configures locksmithd to acquire a lock in etcd before rebooting for an update, and only allows reboots during a 2-hour window starting at 1 AM on Sundays.
+
+[spec]: ../config-transpiler/configuration
+[dropins]: ../../setup/systemd/drop-in-units
+[networkd]: ../../setup/customization/network-config-with-networkd
+[dynamic-data]: ../config-transpiler/dynamic-data
diff --git a/content/docs/latest/provisioning/cl-config/from-cloud-config.md b/content/docs/latest/provisioning/cl-config/from-cloud-config.md
new file mode 100644
index 00000000..5b8578ca
--- /dev/null
+++ b/content/docs/latest/provisioning/cl-config/from-cloud-config.md
@@ -0,0 +1,349 @@
+---
+title: Migrating from cloud-config to Container Linux Config
+linktitle: Migrating from cloud-config
+weight: 40
+aliases:
+ - ../../os/migrating-to-clcs
+ - ../../reference/migrating-to-clcs
+ - migrating-to-clcs
+---
+
+Flatcar Container Linux started as a fork of CoreOS Container Linux. Historically, the recommended way to provision a CoreOS Container Linux machine was with a cloud-config. This was a YAML file specifying things like systemd units to run, users that should exist, and files that should be written. This file would be given to a CoreOS Container Linux machine, and saved on disk. Then a utility called coreos-cloudinit running in a systemd unit would read this file, look at the system state, and make necessary changes on every boot.
+
+The current recommended method is provisioning with Container Linux Configs.
+
+This document details how to convert an existing cloud-config into a Container Linux Config. Once a Container Linux Config has been written, it is given to the Config Transpiler to be converted into an Ignition Config. This Ignition Config can then be provided to a booting machine. For more information on this process, take a look at the [provisioning guide][provisioning].
+
+The etcd and flannel examples shown in this document will use dynamic data in the Container Linux Config (anything looking like this: `{PRIVATE_IPV4}`). Not all types of dynamic data are supported on all cloud providers, and if the machine is not on a cloud provider this feature cannot be used. Please see [here][dynamic-data] for more information.
+
+To see all supported options available in a Container Linux Config, please look at the [Container Linux Config schema][ct-config].
+
+## etcd2
+
+In a cloud-config, etcd version 2 can be enabled and configured by using the `coreos.etcd2.*` section. As an example of this:
+
+```yaml
+#cloud-config
+
+coreos:
+ etcd2:
+ discovery: "https://discovery.etcd.io/"
+ advertise-client-urls: "http://$public_ipv4:2379"
+ initial-advertise-peer-urls: "http://$private_ipv4:2380"
+ listen-client-urls: "http://0.0.0.0:2379,http://0.0.0.0:4001"
+ listen-peer-urls: "http://$private_ipv4:2380,http://$private_ipv4:7001"
+```
+
+etcd can be configured in a more general way with a Container Linux Config. This CL Config will use the etcd-member.service systemd unit rather than the etcd2 service understood by cloud-config and coreos-cloudinit. The etcd-member service will download a version of etcd of the user's choosing and run it. This means that in a Container Linux Config both etcd v2 and v3 can be configured.
+
+This is done under the etcd section:
+
+```yaml
+etcd:
+ version: 3.1.6
+```
+
+If the version is omitted, the unit file will use the version of etcd that matches the running version of Flatcar Container Linux.
+
+Configuration options in this section can be provided the same way as they were in a cloud-config, with the exception of dashes (`-`) being replaced with underscores (`_`) in key names.
+
+```yaml
+etcd:
+ name: "{HOSTNAME}"
+ advertise_client_urls: "http://{PRIVATE_IPV4}:2379"
+ initial_advertise_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ listen_client_urls: "http://0.0.0.0:2379"
+ listen_peer_urls: "http://{PRIVATE_IPV4}:2380"
+ initial_cluster: "{HOSTNAME}=http://{PRIVATE_IPV4}:2380"
+```
+
+## flannel
+
+Flannel is easily configurable in a cloud-config the same way etcd is, by using the `coreos.flannel.*` section.
+
+```yaml
+#cloud-config
+
+coreos:
+ flannel:
+ etcd_prefix: "/coreos.com/network2"
+```
+
+The flannel section in a Container Linux Config is used the same way, and a version can optionally be specified for flannel as well.
+
+```yaml
+flannel:
+ version: 0.7.0
+ etcd_prefix: "/coreos.com/network2"
+```
+
+## locksmith
+
+The `coreos.locksmith.*` section in a cloud-config can be used to configure the locksmith daemon via environment variables.
+
+```yaml
+#cloud-config
+
+coreos:
+ locksmith:
+ endpoint: "http://example.com:2379"
+```
+
+Locksmith can be configured in the same way under the locksmith section of a Container Linux Config, but some of the accepted options are slightly different. Note also that the reboot strategy is set in the locksmith section, instead of the update section. Check out the [Container Linux Config schema][ct-config] to see what options are available.
+
+```yaml
+locksmith:
+ reboot_strategy: "reboot"
+ etcd_endpoints: "http://example.com:2379"
+```
+
+## update
+
+The `coreos.update.*` section can be used to configure the reboot strategy, update group, and update server in a cloud-config.
+
+```yaml
+#cloud-config
+coreos:
+ update:
+ reboot-strategy: "etcd-lock"
+ group: "stable"
+ server: "https://public.update.flatcar-linux.net/v1/update/"
+```
+
+In a Container Linux Config, the update section configures the group and server; the reboot-strategy option has moved to the locksmith section.
+
+```yaml
+update:
+ group: "stable"
+ server: "https://public.update.flatcar-linux.net/v1/update/"
+```
+
+## units
+
+The `coreos.units.*` section in a cloud-config can define arbitrary systemd units that should be started after booting.
+
+```yaml
+#cloud-config
+
+coreos:
+ units:
+ - name: "docker-redis.service"
+ command: "start"
+ content: |
+ [Unit]
+ Description=Redis container
+ Author=Me
+ After=docker.service
+
+ [Service]
+ Restart=always
+ ExecStart=/usr/bin/docker start -a redis_server
+ ExecStop=/usr/bin/docker stop -t 2 redis_server
+```
+
+This section could also be used to define systemd drop-in files for existing units.
+
+```yaml
+#cloud-config
+
+coreos:
+ units:
+ - name: "docker.service"
+ drop-ins:
+ - name: "50-insecure-registry.conf"
+ content: |
+ [Service]
+ Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
+```
+
+And existing units could also be started without any further configuration.
+
+```yaml
+#cloud-config
+
+coreos:
+ units:
+ - name: "etcd2.service"
+ command: "start"
+```
+
+One big difference between a Container Linux Config and a cloud-config is that the configuration is applied via [Ignition][ignition] before the machine has fully booted, as opposed to coreos-cloudinit, which runs after the machine has fully booted. As a result, units cannot be started directly from a Container Linux Config; instead, a unit is enabled so that systemd starts it once the machine boots.
+
+_Note: in this example an `[Install]` section has been added so that the unit can be enabled._
+
+```yaml
+systemd:
+ units:
+ - name: "docker-redis.service"
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Redis container
+ Author=Me
+ After=docker.service
+
+ [Service]
+ Restart=always
+ ExecStart=/usr/bin/docker start -a redis_server
+ ExecStop=/usr/bin/docker stop -t 2 redis_server
+
+ [Install]
+ WantedBy=multi-user.target
+```
+
+Drop-in files can be provided for units in a Container Linux Config just like in a cloud-config.
+
+```yaml
+systemd:
+ units:
+ - name: "docker.service"
+ dropins:
+ - name: "50-insecure-registry.conf"
+ contents: |
+ [Service]
+ Environment=DOCKER_OPTS='--insecure-registry="10.0.1.0/24"'
+```
+
+Existing units can also be enabled without configuration.
+
+```yaml
+systemd:
+ units:
+ - name: "etcd-member.service"
+ enabled: true
+```
+
+## ssh_authorized_keys
+
+In a cloud-config the `ssh_authorized_keys` section can be used to add SSH public keys to the `core` user.
+
+```yaml
+#cloud-config
+
+ssh_authorized_keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
+```
+
+In a Container Linux Config there is no section analogous to `ssh_authorized_keys`, but SSH keys for the `core` user can be set just as easily using the `passwd.users.*` section:
+
+```yaml
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
+```
+
+## hostname
+
+In a cloud-config the `hostname` section can be used to set a machine's hostname.
+
+```yaml
+#cloud-config
+
+hostname: "coreos1"
+```
+
+The Container Linux Config is intentionally more generalized than a cloud-config, and there is no equivalent hostname section understood in a CL Config. Instead, set the hostname by writing it to `/etc/hostname` in a CL Config `storage.files.*` section.
+
+```yaml
+storage:
+ files:
+ - filesystem: "root"
+ path: "/etc/hostname"
+ mode: 0644
+ contents:
+ inline: coreos1
+```
+
+## users
+
+The `users` section in a cloud-config can be used to add users and specify many properties about them, from groups the user should be in to what the user's shell should be.
+
+```yaml
+#cloud-config
+
+users:
+ - name: "elroy"
+ passwd: "$6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm..."
+ groups:
+ - "sudo"
+ - "docker"
+ ssh-authorized-keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
+```
+
+This same information can be added to the Container Linux Config in the `passwd.users.*` section.
+
+```yaml
+passwd:
+ users:
+ - name: "elroy"
+ password_hash: "$6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm..."
+ ssh_authorized_keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0g+ZTxC7weoIJLUafOgrm+h..."
+ groups:
+ - "sudo"
+ - "docker"
+```
+
+## write_files
+
+The `write_files` section in a cloud-config can be used to specify files and their contents that should be written to disk on the machine.
+
+```yaml
+#cloud-config
+write_files:
+ - path: "/etc/resolv.conf"
+ permissions: "0644"
+ owner: "root"
+ content: |
+ nameserver 8.8.8.8
+```
+
+This can be done in a Container Linux Config with the `storage.files.*` section.
+
+```yaml
+storage:
+ files:
+ - filesystem: "root"
+ path: "/etc/resolv.conf"
+ mode: 0644
+ contents:
+ inline: |
+ nameserver 8.8.8.8
+```
+
+File specifications in this section of a CL Config must define the target filesystem and the file's path relative to the root of that filesystem. This allows files to be written to filesystems other than the root filesystem.
+
+Under the `contents` section, the file contents are given in a sub-section called `inline`. Alternatively, a file's contents can be fetched remotely by replacing the `inline` section with a `remote` section. To see what options are available under the `remote` section, look at the [Container Linux Config schema][ct-config].
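+
+As an illustration, a hypothetical file with remote contents and verification might look like the following (the URL and checksum are placeholders):
+
+```yaml
+storage:
+  files:
+    - filesystem: "root"
+      path: "/etc/example.conf"
+      mode: 0644
+      contents:
+        remote:
+          url: "https://example.com/example.conf"
+          verification:
+            hash:
+              function: "sha512"
+              sum: "012345..."
+```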
+
+## manage_etc_hosts
+
+The `manage_etc_hosts` section in a cloud-config can be used to configure the contents of the `/etc/hosts` file. Currently only one value is supported, `"localhost"`, which will cause your system's hostname to resolve to `127.0.0.1`.
+
+```yaml
+#cloud-config
+
+manage_etc_hosts: "localhost"
+```
+
+There is no analogous section in a Container Linux Config, however the `/etc/hosts` file can be written in the `storage.files.*` section.
+
+```yaml
+storage:
+ files:
+ - filesystem: "root"
+ path: "/etc/hosts"
+ mode: 0644
+ contents:
+ inline: |
+ 127.0.0.1 localhost
+ ::1 localhost
+ 127.0.0.1 example.com
+```
+
+[provisioning]: _index.md
+[dynamic-data]: ../config-transpiler/dynamic-data
+[ct-config]: ../config-transpiler/configuration
+[ignition]: ../ignition
diff --git a/content/docs/latest/provisioning/cl-config/operators-notes.md b/content/docs/latest/provisioning/cl-config/operators-notes.md
new file mode 100644
index 00000000..818b782a
--- /dev/null
+++ b/content/docs/latest/provisioning/cl-config/operators-notes.md
@@ -0,0 +1,20 @@
+---
+title: Operator Notes
+weight: 70
+aliases:
+ - ../../container-linux-config-transpiler/doc/operators-notes
+ - ../../container-linux-config-transpiler/operators-notes
+---
+
+## Type GUID aliases
+
+The Config Transpiler supports several aliases for GPT partition type GUIDs. They are as follows:
+
+| Alias Name | Resolved Type GUID |
+| -- | -- |
+| `raid_containing_root` | `be9067b9-ea49-4f15-b4f6-f36f8c9e1818` |
+| `linux_filesystem_data` | `0fc63daf-8483-4772-8e79-3d69d8477de4` |
+| `swap_partition` | `0657fd6d-a4ab-43c4-84e5-0933c84b4f4f` |
+| `raid_partition` | `a19d880f-05fc-4d3b-a006-743f0f84911e` |
+
+See the [Using RAID for the Root Filesystem](../../setup/storage/raid/) documentation for when to use `raid_containing_root`.
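+
+As a sketch, an alias can be used anywhere a partition type GUID is accepted; the device path and label below are illustrative:
+
+```yaml
+storage:
+  disks:
+    - device: "/dev/sdb"
+      wipe_table: true
+      partitions:
+        - label: "raid.1"
+          type_guid: "raid_partition"
+```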
diff --git a/content/docs/latest/provisioning/cl-config/specification.md b/content/docs/latest/provisioning/cl-config/specification.md
new file mode 100644
index 00000000..c1a1940c
--- /dev/null
+++ b/content/docs/latest/provisioning/cl-config/specification.md
@@ -0,0 +1,185 @@
+---
+title: CL Configuration Specification
+weight: 80
+aliases:
+ - ../../container-linux-config-transpiler/doc/configuration
+ - ../../container-linux-config-transpiler/configuration
+---
+
+A Container Linux Configuration, to be processed by `ct`, is a YAML document conforming to the following specification:
+
+_Note: all fields are optional unless otherwise marked_
+
+* **ignition** (object): metadata about the configuration itself.
+ * **config** (objects): options related to the configuration.
+ * **append** (list of objects): a list of the configs to be appended to the current config.
+ * **source** (string, required): the URL of the config. Supported schemes are http, https, s3, tftp, and [data][rfc2397]. Note: When using http, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **verification** (object): options related to the verification of the config.
+ * **hash** (object): the hash of the config
+ * **function** (string): the function used to hash the config. Supported functions are sha512.
+ * **sum** (string): the resulting sum of the hash applied to the contents.
+ * **replace** (object): the config that will replace the current.
+ * **source** (string, required): the URL of the config. Supported schemes are http, https, s3, tftp, and [data][rfc2397]. Note: When using http, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **verification** (object): options related to the verification of the config.
+ * **hash** (object): the hash of the config
+ * **function** (string): the function used to hash the config. Supported functions are sha512.
+ * **sum** (string): the resulting sum of the hash applied to the contents.
+ * **timeouts** (object): options relating to http timeouts when fetching files over http or https.
+ * **http_response_headers** (integer): the time to wait (in seconds) for the server's response headers (but not the body) after making a request. 0 indicates no timeout. Default is 10 seconds.
+ * **http_total** (integer): the time limit (in seconds) for the operation (connection, request, and response), including retries. 0 indicates no timeout. Default is 0.
+ * **security** (object): options relating to network security.
+ * **tls** (object): options relating to TLS when fetching resources over `https`.
+ * **certificate_authorities** (object): the list of additional certificate authorities (in addition to the system authorities) to be used for TLS verification when fetching over `https`.
+ * **source** (string, required): the URL of the certificate (in PEM format). Supported schemes are `http`, `https`, `s3`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **verification** (object): options related to the verification of the certificate.
+ * **hash** (string): the hash of the certificate, in the form `<type>-<value>` where type is sha512.
+* **storage** (object): describes the desired state of the system's storage devices.
+ * **disks** (list of objects): the list of disks to be configured and their options.
+ * **device** (string, required): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **wipe_table** (boolean): whether or not the partition tables shall be wiped. When true, the partition tables are erased before any further manipulation. Otherwise, the existing entries are left intact.
+ * **partitions** (list of objects): the list of partitions and their configuration for this particular disk.
+ * **label** (string): the PARTLABEL for the partition.
+ * **number** (integer): the partition number, which dictates its position in the partition table (one-indexed). If zero, use the next available partition slot.
+ * **size** (string): the size of the partition with a unit (KiB, MiB, GiB). If zero, the partition will fill the remainder of the disk.
+ * **start** (string): the start of the partition with a unit (KiB, MiB, GiB). If zero, the partition will be positioned at the earliest available part of the disk.
+ * **type_guid** (string): the GPT [partition type GUID][part-types]. If omitted, the default will be 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem data). The keywords `linux_filesystem_data`, `raid_partition`, `swap_partition`, and `raid_containing_root` can also be used.
+ * **guid** (string): the GPT unique partition GUID.
+ * **raid** (list of objects): the list of RAID arrays to be configured.
+ * **name** (string, required): the name to use for the resulting md device.
+ * **level** (string, required): the redundancy level of the array (e.g. linear, raid1, raid5, etc.).
+ * **devices** (list of strings, required): the list of devices (referenced by their absolute path) in the array.
+ * **spares** (integer): the number of spares (if applicable) in the array.
+ * **options** (list of strings): any additional options to be passed to mdadm.
+ * **filesystems** (list of objects): the list of filesystems to be configured and/or used in the "files" section. Either "mount" or "path" needs to be specified.
+ * **name** (string): the identifier for the filesystem, internal to Ignition. This is only required if the filesystem needs to be referenced in the "files" section.
+ * **mount** (object): contains the set of mount and formatting options for the filesystem. A non-null entry indicates that the filesystem should be mounted before it is used by Ignition.
+ * **device** (string, required): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **format** (string, required): the filesystem format (ext4, btrfs, or xfs).
+ * **wipe_filesystem** (boolean): whether or not to wipe the device before filesystem creation, see [Ignition's documentation on filesystems][ignition-fs-reuse] for more information.
+ * **label** (string): the label of the filesystem.
+ * **uuid** (string): the uuid of the filesystem.
+ * **options** (list of strings): any additional options to be passed to the format-specific mkfs utility.
+ * **create** (object, DEPRECATED): contains the set of options to be used when creating the filesystem. A non-null entry indicates that the filesystem shall be created.
+ * **force** (boolean, DEPRECATED): whether or not the create operation shall overwrite an existing filesystem.
+ * **options** (list of strings, DEPRECATED): any additional options to be passed to the format-specific mkfs utility.
+ * **path** (string): the mount-point of the filesystem. A non-null entry indicates that the filesystem has already been mounted by the system at the specified path. This is really only useful for "/sysroot".
+ * **files** (list of objects): the list of files, rooted in this particular filesystem, to be written.
+ * **filesystem** (string, required): the internal identifier of the filesystem. This matches the last filesystem with the given identifier.
+ * **path** (string, required): the absolute path to the file.
+ * **overwrite** (boolean): whether to delete preexisting nodes at the path. Defaults to true.
+ * **append** (boolean): whether to append to the specified file. Creates a new file if nothing exists at the path. Cannot be set if overwrite is set to true.
+ * **contents** (object): options related to the contents of the file.
+ * **inline** (string): the contents of the file.
+ * **local** (string): the path to a local file, relative to the `--files-dir` directory. When using local files, the `--files-dir` flag must be passed to `ct`. The file contents are included in the generated config.
+ * **remote** (object): options related to the fetching of remote file contents. Remote files are fetched by Ignition when Ignition runs, the contents are not included in the generated config.
+ * **compression** (string): the type of compression used on the contents (null or gzip)
+ * **url** (string): the URL of the file contents. Supported schemes are http, https, tftp, s3, and [data][rfc2397]. Note: When using http, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **verification** (object): options related to the verification of the file contents.
+ * **hash** (object): the hash of the config
+ * **function** (string): the function used to hash the config. Supported functions are sha512.
+ * **sum** (string): the resulting sum of the hash applied to the contents.
+ * **mode** (integer): the file's permission mode.
+ * **user** (object): specifies the file's owner.
+ * **id** (integer): the user ID of the owner.
+ * **name** (string): the user name of the owner.
+ * **group** (object): specifies the group of the owner.
+ * **id** (integer): the group ID of the owner.
+ * **name** (string): the group name of the owner.
+ * **directories** (list of objects): the list of directories to be created.
+ * **filesystem** (string, required): the internal identifier of the filesystem in which to create the directory. This matches the last filesystem with the given identifier.
+ * **path** (string, required): the absolute path to the directory.
+ * **overwrite** (boolean): whether to delete preexisting nodes at the path.
+ * **mode** (integer): the directory's permission mode.
+ * **user** (object): specifies the directory's owner.
+ * **id** (integer): the user ID of the owner.
+ * **name** (string): the user name of the owner.
+ * **group** (object): specifies the group of the owner.
+ * **id** (integer): the group ID of the owner.
+ * **name** (string): the group name of the owner.
+ * **links** (list of objects): the list of links to be created
+ * **filesystem** (string, required): the internal identifier of the filesystem in which to write the link. This matches the last filesystem with the given identifier.
+ * **path** (string, required): the absolute path to the link
+ * **overwrite** (boolean): whether to delete preexisting nodes at the path.
+ * **user** (object): specifies the symbolic link's owner.
+ * **id** (integer): the user ID of the owner.
+ * **name** (string): the user name of the owner.
+ * **group** (object): specifies the group of the owner.
+ * **id** (integer): the group ID of the owner.
+ * **name** (string): the group name of the owner.
+ * **target** (string, required): the target path of the link
+ * **hard** (boolean): a symbolic link is created if this is false, a hard one if this is true.
+* **systemd** (object): describes the desired state of the systemd units.
+ * **units** (list of objects): the list of systemd units.
+ * **name** (string, required): the name of the unit. This must be suffixed with a valid unit type (e.g. "thing.service").
+ * **enable** (boolean, DEPRECATED): whether or not the service shall be enabled. When true, the service is enabled. In order for this to have any effect, the unit must have an install section.
+ * **enabled** (boolean): whether or not the service shall be enabled. When true, the service is enabled. When false, the service is disabled. When omitted, the service is unmodified. In order for this to have any effect, the unit must have an install section.
+ * **mask** (boolean): whether or not the service shall be masked. When true, the service is masked by symlinking it to `/dev/null`.
+ * **contents** (string): the contents of the unit.
+ * **dropins** (list of objects): the list of drop-ins for the unit.
+ * **name** (string, required): the name of the drop-in. This must be suffixed with ".conf".
+ * **contents** (string): the contents of the drop-in.
+* **networkd** (object): describes the desired state of the networkd files.
+ * **units** (list of objects): the list of networkd files.
+ * **name** (string, required): the name of the file. This must be suffixed with a valid unit type (e.g. "00-eth0.network").
+ * **contents** (string): the contents of the networkd file.
+ * **dropins** (list of objects): the list of drop-ins for the unit.
+ * **name** (string, required): the name of the drop-in. This must be suffixed with ".conf".
+ * **contents** (string): the contents of the drop-in.
+* **passwd** (object): describes the desired additions to the passwd database.
+ * **users** (list of objects): the list of accounts that shall exist.
+ * **name** (string, required): the username for the account.
+ * **password_hash** (string): the encrypted password for the account.
+ * **ssh_authorized_keys** (list of strings): a list of SSH keys to be added to the user's authorized_keys.
+ * **uid** (integer): the user ID of the account.
+ * **gecos** (string): the GECOS field of the account.
+ * **home_dir** (string): the home directory of the account.
+ * **no_create_home** (boolean): whether or not to create the user's home directory. This only has an effect if the account doesn't exist yet.
+ * **primary_group** (string): the name of the primary group of the account.
+ * **groups** (list of strings): the list of supplementary groups of the account.
+ * **no_user_group** (boolean): whether or not to create a group with the same name as the user. This only has an effect if the account doesn't exist yet.
+ * **no_log_init** (boolean): whether or not to add the user to the lastlog and faillog databases. This only has an effect if the account doesn't exist yet.
+ * **shell** (string): the login shell of the new account.
+ * **system** (boolean): whether or not to make the account a system account. This only has an effect if the account doesn't exist yet.
+ * **create** (object, DEPRECATED): contains the set of options to be used when creating the user. A non-null entry indicates that the user account shall be created.
+ * **uid** (integer, DEPRECATED): the user ID of the new account.
+ * **gecos** (string, DEPRECATED): the GECOS field of the new account.
+ * **home_dir** (string, DEPRECATED): the home directory of the new account.
+ * **no_create_home** (boolean, DEPRECATED): whether or not to create the user's home directory.
+ * **primary_group** (string, DEPRECATED): the name or ID of the primary group of the new account.
+ * **groups** (list of strings, DEPRECATED): the list of supplementary groups of the new account.
+ * **no_user_group** (boolean, DEPRECATED): whether or not to create a group with the same name as the user.
+ * **no_log_init** (boolean, DEPRECATED): whether or not to add the user to the lastlog and faillog databases.
+ * **shell** (string, DEPRECATED): the login shell of the new account.
+ * **groups** (list of objects): the list of groups to be added.
+ * **name** (string, required): the name of the group.
+ * **gid** (integer): the group ID of the new group.
+ * **password_hash** (string): the encrypted password of the new group.
+* **etcd**
+ * **version** (string): the version of etcd to be run
+ * **_other options_** (string): this section accepts any valid etcd options for the version of etcd specified. For a comprehensive list, please consult etcd's documentation. Note all options here should be in snake_case, not spine-case.
+* **flannel**
+ * **version** (string): the version of flannel to be run
+ * **network_config** (string): the flannel configuration to be written into etcd before flannel starts.
+ * **_other options_** (string): this section accepts any valid flannel options for the version of flannel specified. For a comprehensive list, please consult flannel's documentation. Note all options here should be in snake_case, not spine-case.
+* **docker**
+ * **flags** (list of strings): additional flags to pass to the docker daemon when it is started
+* **update**
+ * **group** (string): the update group to follow. Most users will want one of: stable, beta, alpha.
+ * **server** (string): the server to fetch updates from.
+ * **pcr_policy_server** (string): the server to receive posted TPM PCR policy from.
+ * **download_user** (string): the authentication user to fetch the update.
+ * **download_password** (string): the authentication password to fetch the update.
+ * **machine_alias** (string): human readable machine alias to be displayed in the update server UI.
+* **locksmith**
+ * **reboot_strategy** (string): the reboot strategy for locksmithd to follow. Must be one of: reboot, etcd-lock, off.
+ * **window_start** (string, required if window-length isn't empty): the start of the window during which locksmithd can reboot the machine.
+ * **window_length** (string, required if window-start isn't empty): the duration of the window during which locksmithd can reboot the machine.
+ * **group** (string): the locksmith etcd group to be part of for reboot control
+ * **etcd_endpoints** (string): the endpoints of etcd locksmith should use
+ * **etcd_cafile** (string): the tls CA file to use when communicating with etcd
+ * **etcd_certfile** (string): the tls cert file to use when communicating with etcd
+ * **etcd_keyfile** (string): the tls key file to use when communicating with etcd
+
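+As an illustration, a short hypothetical config exercising the storage section might look like this (the device path, labels, and file contents are placeholders):
+
+```yaml
+storage:
+  disks:
+    - device: "/dev/disk/by-id/ata-EXAMPLE"
+      wipe_table: true
+      partitions:
+        - label: "data"
+          number: 1
+          size: "10GiB"
+          type_guid: "linux_filesystem_data"
+  filesystems:
+    - name: "data"
+      mount:
+        device: "/dev/disk/by-partlabel/data"
+        format: "ext4"
+        wipe_filesystem: true
+  files:
+    - filesystem: "data"
+      path: "/hello.txt"
+      mode: 0644
+      contents:
+        inline: "Hello, world!"
+```
+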
+[part-types]: http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
+[rfc2397]: https://tools.ietf.org/html/rfc2397
+[ignition-fs-reuse]: https://github.com/coreos/ignition/blob/main/docs/operator-notes.md#filesystem-reuse-semantics
diff --git a/content/docs/latest/provisioning/config-transpiler/_index.md b/content/docs/latest/provisioning/config-transpiler/_index.md
new file mode 100644
index 00000000..bd18f08e
--- /dev/null
+++ b/content/docs/latest/provisioning/config-transpiler/_index.md
@@ -0,0 +1,80 @@
+---
+content_type: butane
+title: Butane Config Transpiler
+linktitle: Butane Config Transpiler
+description: Transforms Butane files into Ignition configuration
+main_menu: true
+weight: 30
+aliases:
+ - ../container-linux-config-transpiler/doc/overview
+ - ../container-linux-config-transpiler
+---
+
+Butane is the utility responsible for transforming a user-provided Butane Configuration into an [Ignition][ignition] configuration. The resulting Ignition config can then be provided to a Container Linux machine when it first boots in order to provision it.
+
+The Butane Config is intended to be human-friendly, and is thus in YAML. The syntax is rather forgiving, and things like references and multi-line strings are supported.
+
+The resulting Ignition config is very much not intended to be human-friendly. It is an artifact produced by Butane that users should simply pass along to their machines. JSON was chosen over a binary format to keep the process transparent and to allow power users to inspect or modify what Butane produces, but it would have worked fine even if Butane's output were not human-readable at all.
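+
+As a minimal sketch, a Butane Config is plain YAML; the `variant` and `version` keys tell Butane which configuration specification to target (the SSH key below is a placeholder):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+  users:
+    - name: core
+      ssh_authorized_keys:
+        - ssh-rsa AAAA... core@example
+```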
+
+[butane]: https://github.com/coreos/butane/
+[ignition]: https://github.com/coreos/ignition
+
+**Note:** Butane is used to generate Ignition v3+ configurations. If you are still using a version of Container Linux that requires Ignition v2, refer to the [Container Linux Config Transpiler][cl-config] documentation. This particularly applies to those using the current LTS releases.
+
+## Why a two-step process?
+
+There are a couple of factors motivating the decision not to incorporate support for Butane Configs directly into the boot process of Container Linux (that is, the ability to provide a Butane Config directly to a booting machine, instead of an Ignition config).
+
+- By making users run their configs through butane before they attempt to boot a machine, issues with their configs can be caught before any machine attempts to boot. This will save users time, as they can much more quickly find problems with their configs. Were users to provide Butane Configs directly to machines at first boot, they would need to find a way to extract the Ignition logs from a machine that may have failed to boot, which can be a slow and tedious process.
+- YAML parsing is a complex process that in the past has been rather error-prone. By only doing JSON parsing in the boot path, we can guarantee that the utilities necessary for a machine to boot are simpler and more reliable. We want to allow users to use YAML however, as it's much more human-friendly than JSON, hence the decision to have a tool separate from the boot path to "transpile" YAML configurations to machine-appropriate JSON ones.
+
+## Tell me more about Ignition
+
+[Ignition][ignition] is the utility inside of a Container Linux image that is responsible for setting up a machine. It takes in a configuration, written in JSON, that instructs it to do things like add users, format disks, and install systemd units. The artifacts that butane produces are Ignition configs. All of this should be an implementation detail, however: users are encouraged to write Butane Configs for butane, and to simply pass the produced JSON file along to their machines.
+
+## How similar are Butane Configs and Ignition configs?
+
+Some features in Butane Configs and Ignition configs are identical. Both support listing users for creation, systemd unit dropins for installation, and files for writing.
+
+All of the differences stem from the fact that Ignition configs are distribution agnostic. An Ignition config can't just tell Ignition to enable etcd, because Ignition doesn't know what etcd is. The config must tell Ignition what systemd unit to enable, and provide a systemd dropin to configure etcd.
+
+Butane, on the other hand, _does_ understand the specifics of Flatcar Container Linux. For example, a user currently can't specify Clevis options on Flatcar, and Butane performs this sanity check on the user's behalf.
+
+## Example Butane Config
+
+The following small example of a Butane Config will ensure that the default `core` user exists and adds a specified public key:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB......xyz email@host.net
+```
+
+To turn this Butane Config into a usable Ignition Config, we can then run: `docker run --rm -i quay.io/coreos/butane:latest < your_config.yaml > your_config.json`. This will result in the above YAML being turned into the below JSON:
+
+```json
+{
+ "ignition": {
+ "version": "3.3.0"
+ },
+ "passwd": {
+ "users": [
+ {
+ "name": "core",
+ "sshAuthorizedKeys": [
+ "ssh-rsa AAAAB......xyz email@host.net"
+ ]
+ }
+ ]
+ }
+}
+```
+
+To learn more about Butane and the configurations that are available, you can refer to the latest [Butane Spec][butane-spec].
+
+[butane-spec]: https://coreos.github.io/butane
+[cl-config]: ../cl-config
diff --git a/content/docs/latest/provisioning/config-transpiler/configuration.md b/content/docs/latest/provisioning/config-transpiler/configuration.md
new file mode 100644
index 00000000..3f0985e6
--- /dev/null
+++ b/content/docs/latest/provisioning/config-transpiler/configuration.md
@@ -0,0 +1,189 @@
+---
+title: Butane Configuration Specification
+weight: 80
+---
+
+The Butane Flatcar variant configuration is a YAML document conforming to the following specification, with **_italicized_** entries being optional:
+
+* **variant** (string): used to differentiate configs for different operating systems. Must be `flatcar` for this specification.
+* **version** (string): the semantic version of the spec for this document. This document is for version `1.0.0` and generates Ignition configs with version `3.3.0`.
+* **_ignition_** (object): metadata about the configuration itself.
+ * **_config_** (objects): options related to the configuration.
+ * **_merge_** (list of objects): a list of the configs to be merged to the current config.
+ * **_source_** (string): the URL of the config. Supported schemes are `http`, `https`, `s3`, `gs`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified. Mutually exclusive with `inline` and `local`.
+ * **_inline_** (string): the contents of the config. Mutually exclusive with `source` and `local`.
+ * **_local_** (string): a local path to the contents of the config, relative to the directory specified by the `--files-dir` command-line argument. Mutually exclusive with `source` and `inline`.
+ * **_compression_** (string): the type of compression used on the config (null or gzip). Compression cannot be used with S3.
+ * **_http_headers_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the config.
+ * **_hash_** (string): the hash of the config, in the form `<type>-<value>` where `type` is either `sha512` or `sha256`.
+ * **_replace_** (object): the config that will replace the current.
+ * **_source_** (string): the URL of the config. Supported schemes are `http`, `https`, `s3`, `gs`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified. Mutually exclusive with `inline` and `local`.
+ * **_inline_** (string): the contents of the config. Mutually exclusive with `source` and `local`.
+ * **_local_** (string): a local path to the contents of the config, relative to the directory specified by the `--files-dir` command-line argument. Mutually exclusive with `source` and `inline`.
+ * **_compression_** (string): the type of compression used on the config (null or gzip). Compression cannot be used with S3.
+ * **_http_headers_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the config.
+ * **_hash_** (string): the hash of the config, in the form `<type>-<value>` where `type` is either `sha512` or `sha256`.
+ * **_timeouts_** (object): options relating to `http` timeouts when fetching files over `http` or `https`.
+ * **_http_response_headers_** (integer): the time to wait (in seconds) for the server's response headers (but not the body) after making a request. 0 indicates no timeout. Default is 10 seconds.
+ * **_http_total_** (integer): the time limit (in seconds) for the operation (connection, request, and response), including retries. 0 indicates no timeout. Default is 0.
+ * **_security_** (object): options relating to network security.
+ * **_tls_** (object): options relating to TLS when fetching resources over `https`.
+ * **_certificate_authorities_** (list of objects): the list of additional certificate authorities (in addition to the system authorities) to be used for TLS verification when fetching over `https`. All certificate authorities must have a unique `source`, `inline`, or `local`.
+ * **_source_** (string): the URL of the certificate bundle (in PEM format). With Ignition ≥ 2.4.0, the bundle can contain multiple concatenated certificates. Supported schemes are `http`, `https`, `s3`, `gs`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified. Mutually exclusive with `inline` and `local`.
+ * **_inline_** (string): the contents of the certificate bundle (in PEM format). With Ignition ≥ 2.4.0, the bundle can contain multiple concatenated certificates. Mutually exclusive with `source` and `local`.
+ * **_local_** (string): a local path to the contents of the certificate bundle (in PEM format), relative to the directory specified by the `--files-dir` command-line argument. With Ignition ≥ 2.4.0, the bundle can contain multiple concatenated certificates. Mutually exclusive with `source` and `inline`.
+ * **_compression_** (string): the type of compression used on the certificate (null or gzip). Compression cannot be used with S3.
+ * **_http_headers_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the certificate.
+ * **_hash_** (string): the hash of the certificate, in the form `<type>-<value>` where `type` is either `sha512` or `sha256`.
+ * **_proxy_** (object): options relating to setting an `HTTP(S)` proxy when fetching resources.
+ * **_http_proxy_** (string): will be used as the proxy URL for HTTP requests and HTTPS requests unless overridden by `https_proxy` or `no_proxy`.
+ * **_https_proxy_** (string): will be used as the proxy URL for HTTPS requests unless overridden by `no_proxy`.
+ * **_no_proxy_** (list of strings): a list of hosts that should be excluded from proxying. Each value is represented by an IP address prefix (`1.2.3.4`), an IP address prefix in CIDR notation (`1.2.3.4/8`), a domain name, or a special DNS label (`*`). An IP address prefix and domain name can also include a literal port number (`1.2.3.4:80`). A domain name matches that name and all subdomains. A domain name with a leading `.` matches subdomains only. For example `foo.com` matches `foo.com` and `bar.foo.com`; `.y.com` matches `x.y.com` but not `y.com`. A single asterisk (`*`) indicates that no proxying should be done.
+* **_storage_** (object): describes the desired state of the system's storage devices.
+ * **_disks_** (list of objects): the list of disks to be configured and their options. Every entry must have a unique `device`.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **_wipe_table_** (boolean): whether or not the partition tables shall be wiped. When true, the partition tables are erased before any further manipulation. Otherwise, the existing entries are left intact.
+ * **_partitions_** (list of objects): the list of partitions and their configuration for this particular disk. Every partition must have a unique `number`, or if 0 is specified, a unique `label`.
+ * **_label_** (string): the PARTLABEL for the partition.
+ * **_number_** (integer): the partition number, which dictates its position in the partition table (one-indexed). If zero, use the next available partition slot.
+ * **_size_mib_** (integer): the size of the partition (in mebibytes). If zero, the partition will be made as large as possible.
+ * **_start_mib_** (integer): the start of the partition (in mebibytes). If zero, the partition will be positioned at the start of the largest block available.
+ * **_type_guid_** (string): the GPT [partition type GUID][part-types]. If omitted, the default will be 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem data).
+ * **_guid_** (string): the GPT unique partition GUID.
+ * **_wipe_partition_entry_** (boolean): if true, Ignition will clobber an existing partition if it does not match the config. If false (default), Ignition will fail instead.
+ * **_should_exist_** (boolean): whether or not the partition with the specified `number` should exist. If omitted, it defaults to true. If false, Ignition will either delete the specified partition or fail, depending on `wipePartitionEntry`. If false, `number` must be specified and non-zero, and `label`, `start`, `size`, `guid`, and `typeGuid` must all be omitted.
+ * **_resize_** (boolean): whether or not the existing partition should be resized. If omitted, it defaults to false. If true, Ignition will resize an existing partition if it matches the config in all respects except the partition size.
+ * **_raid_** (list of objects): the list of RAID arrays to be configured. Every RAID array must have a unique `name`.
+ * **name** (string): the name to use for the resulting md device.
+ * **level** (string): the redundancy level of the array (e.g. linear, raid1, raid5, etc.).
+ * **devices** (list of strings): the list of devices (referenced by their absolute path) in the array.
+ * **_spares_** (integer): the number of spares (if applicable) in the array.
+ * **_options_** (list of strings): any additional options to be passed to mdadm.
+ * **_filesystems_** (list of objects): the list of filesystems to be configured. `device` and `format` need to be specified. Every filesystem must have a unique `device`.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **format** (string): the filesystem format (ext4, btrfs, xfs, vfat, swap, or none).
+ * **_path_** (string): the mount-point of the filesystem while Ignition is running relative to where the root filesystem will be mounted. This is not necessarily the same as where it should be mounted in the real root, but it is encouraged to make it the same.
+ * **_wipe_filesystem_** (boolean): whether or not to wipe the device before filesystem creation, see [the documentation on filesystems](https://coreos.github.io/ignition/operator-notes/#filesystem-reuse-semantics) for more information. Defaults to false.
+ * **_label_** (string): the label of the filesystem.
+ * **_uuid_** (string): the uuid of the filesystem.
+ * **_options_** (list of strings): any additional options to be passed to the format-specific mkfs utility.
+ * **_mount_options_** (list of strings): any special options to be passed to the mount command.
+ * **_with_mount_unit_** (boolean): whether to additionally generate a generic mount unit for this filesystem or a swap unit for this swap area. If a more specific unit is needed, a custom one can be specified in the `systemd.units` section. The unit will be named with the [escaped][systemd-escape] version of the `path` or `device`, depending on the unit type. If your filesystem is located on a Tang-backed LUKS device, the unit will automatically require network access if you specify the device as `/dev/mapper/<name>` or `/dev/disk/by-id/dm-name-<name>`.
+ * **_files_** (list of objects): the list of files to be written. Every file, directory and link must have a unique `path`.
+ * **path** (string): the absolute path to the file.
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path. `contents` must be specified if `overwrite` is true. Defaults to false.
+ * **_contents_** (object): options related to the contents of the file.
+ * **_compression_** (string): the type of compression used on the contents (null or gzip). Compression cannot be used with S3.
+ * **_source_** (string): the URL of the file contents. Supported schemes are `http`, `https`, `tftp`, `s3`, `gs`, and [`data`][rfc2397]. When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified. If source is omitted and a regular file already exists at the path, Ignition will do nothing. If source is omitted and no file exists, an empty file will be created. Mutually exclusive with `inline` and `local`.
+ * **_inline_** (string): the contents of the file. Mutually exclusive with `source` and `local`.
+ * **_local_** (string): a local path to the contents of the file, relative to the directory specified by the `--files-dir` command-line argument. Mutually exclusive with `source` and `inline`.
+ * **_http_headers_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the file contents.
+ * **_hash_** (string): the hash of the contents, in the form `<type>-<value>` where `type` is either `sha512` or `sha256`.
+ * **_append_** (list of objects): list of contents to be appended to the file. Follows the same structure as `contents`.
+ * **_compression_** (string): the type of compression used on the contents (null or gzip). Compression cannot be used with S3.
+ * **_source_** (string): the URL of the contents to append. Supported schemes are `http`, `https`, `tftp`, `s3`, `gs`, and [`data`][rfc2397]. When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified. Mutually exclusive with `inline` and `local`.
+ * **_inline_** (string): the contents to append. Mutually exclusive with `source` and `local`.
+ * **_local_** (string): a local path to the contents to append, relative to the directory specified by the `--files-dir` command-line argument. Mutually exclusive with `source` and `inline`.
+ * **_http_headers_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the appended contents.
+ * **_hash_** (string): the hash of the appended contents, in the form `<type>-<value>` where `type` is either `sha512` or `sha256`.
+ * **_mode_** (integer): the file's permission mode. Setuid/setgid/sticky bits are not supported. If not specified, the permission mode for files defaults to 0644 or the existing file's permissions if `overwrite` is false, `contents` is unspecified, and a file already exists at the path.
+ * **_user_** (object): specifies the file's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the file's group.
+ * **_id_** (integer): the group ID of the group.
+ * **_name_** (string): the group name of the group.
+ * **_directories_** (list of objects): the list of directories to be created. Every file, directory, and link must have a unique `path`.
+ * **path** (string): the absolute path to the directory.
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path. If false and a directory already exists at the path, Ignition will only set its permissions. If false and a non-directory exists at that path, Ignition will fail. Defaults to false.
+ * **_mode_** (integer): the directory's permission mode. Setuid/setgid/sticky bits are not supported. If not specified, the permission mode for directories defaults to 0755 or the mode of an existing directory if `overwrite` is false and a directory already exists at the path.
+ * **_user_** (object): specifies the directory's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the directory's group.
+ * **_id_** (integer): the group ID of the group.
+ * **_name_** (string): the group name of the group.
+ * **_links_** (list of objects): the list of links to be created. Every file, directory, and link must have a unique `path`.
+ * **path** (string): the absolute path to the link.
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path. If overwrite is false and a matching link exists at the path, Ignition will only set the owner and group. Defaults to false.
+ * **_user_** (object): specifies the owner for a symbolic link. Ignored for hard links.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the group for a symbolic link. Ignored for hard links.
+ * **_id_** (integer): the group ID of the group.
+ * **_name_** (string): the group name of the group.
+ * **target** (string): the target path of the link.
+ * **_hard_** (boolean): a symbolic link is created if this is false, a hard one if this is true.
+ * **_luks_** (list of objects): the list of luks devices to be created. Every device must have a unique `name`.
+ * **name** (string): the name of the luks device.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **_key_file_** (string): options related to the contents of the key file.
+ * **_compression_** (string): the type of compression used on the contents (null or gzip). Compression cannot be used with S3.
+ * **_source_** (string): the URL of the key file contents. Supported schemes are `http`, `https`, `tftp`, `s3`, `gs`, and [`data`][rfc2397]. When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified. Mutually exclusive with `inline` and `local`.
+ * **_inline_** (string): the contents of the key file. Mutually exclusive with `source` and `local`.
+ * **_local_** (string): a local path to the contents of the key file, relative to the directory specified by the `--files-dir` command-line argument. Mutually exclusive with `source` and `inline`.
+ * **_http_headers_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the file contents.
+ * **_hash_** (string): the hash of the key file contents, in the form `<type>-<value>` where `type` is either `sha512` or `sha256`.
+ * **_label_** (string): the label of the luks device.
+ * **_uuid_** (string): the uuid of the luks device.
+ * **_options_** (list of strings): any additional options to be passed to the cryptsetup utility.
+ * **_wipe_volume_** (boolean): whether or not to wipe the device before volume creation, see [the Ignition documentation on filesystems](https://coreos.github.io/ignition/operator-notes/#filesystem-reuse-semantics) for more information.
+ * **_trees_** (list of objects): a list of local directory trees to be embedded in the config. Ownership is not preserved. File modes are set to 0755 if the local file is executable or 0644 otherwise. Attributes of files, directories, and symlinks can be overridden by creating a corresponding entry in the `files`, `directories`, or `links` section; such `files` entries must omit `contents` and such `links` entries must omit `target`.
+ * **local** (string): the base of the local directory tree, relative to the directory specified by the `--files-dir` command-line argument.
+ * **_path_** (string): the path of the tree within the target system. Defaults to `/`.
+* **_systemd_** (object): describes the desired state of the systemd units.
+ * **_units_** (list of objects): the list of systemd units. Every unit must have a unique `name`.
+ * **name** (string): the name of the unit. This must be suffixed with a valid unit type (e.g. "thing.service").
+ * **_enabled_** (boolean): whether or not the service shall be enabled. When true, the service is enabled. When false, the service is disabled. When omitted, the service is unmodified. In order for this to have any effect, the unit must have an install section.
+ * **_mask_** (boolean): whether or not the service shall be masked. When true, the service is masked by symlinking it to `/dev/null`.
+ * **_contents_** (string): the contents of the unit.
+ * **_dropins_** (list of objects): the list of drop-ins for the unit. Every drop-in must have a unique `name`.
+ * **name** (string): the name of the drop-in. This must be suffixed with ".conf".
+ * **_contents_** (string): the contents of the drop-in.
+* **_passwd_** (object): describes the desired additions to the passwd database.
+ * **_users_** (list of objects): the list of accounts that shall exist. All users must have a unique `name`.
+ * **name** (string): the username for the account.
+ * **_password_hash_** (string): the hashed password for the account.
+ * **_ssh_authorized_keys_** (list of strings): a list of SSH keys to be added as an SSH key fragment at `.ssh/authorized_keys.d/ignition` in the user's home directory. All SSH keys must be unique.
+ * **_uid_** (integer): the user ID of the account.
+ * **_gecos_** (string): the GECOS field of the account.
+ * **_home_dir_** (string): the home directory of the account.
+ * **_no_create_home_** (boolean): whether or not to create the user's home directory. This only has an effect if the account doesn't exist yet.
+ * **_primary_group_** (string): the name of the primary group of the account.
+ * **_groups_** (list of strings): the list of supplementary groups of the account.
+ * **_no_user_group_** (boolean): whether or not to create a group with the same name as the user. This only has an effect if the account doesn't exist yet.
+ * **_no_log_init_** (boolean): whether or not to add the user to the lastlog and faillog databases. This only has an effect if the account doesn't exist yet.
+ * **_shell_** (string): the login shell of the new account.
+ * **_should_exist_** (boolean): whether or not the user with the specified `name` should exist. If omitted, it defaults to true. If false, then Ignition will delete the specified user.
+ * **_system_** (boolean): whether or not this account should be a system account. This only has an effect if the account doesn't exist yet.
+ * **_groups_** (list of objects): the list of groups to be added. All groups must have a unique `name`.
+ * **name** (string): the name of the group.
+ * **_gid_** (integer): the group ID of the new group.
+ * **_password_hash_** (string): the hashed password of the new group.
+ * **_should_exist_** (boolean): whether or not the group with the specified `name` should exist. If omitted, it defaults to true. If false, then Ignition will delete the specified group.
+ * **_system_** (boolean): whether or not the group should be a system group. This only has an effect if the group doesn't exist yet.
+* **_kernel_arguments_** (object): describes the desired kernel arguments.
+ * **_should_exist_** (list of strings): the list of kernel arguments that should exist.
+ * **_should_not_exist_** (list of strings): the list of kernel arguments that should not exist.
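+
+As a sketch of how several of the sections above combine, the following hypothetical config partitions a disk and pins a kernel argument (the device path, partition label, and size are illustrative examples; `flatcar.autologin` enables console autologin on Flatcar):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  disks:
+    - device: /dev/disk/by-id/virtio-data-disk
+      wipe_table: false
+      partitions:
+        - number: 1
+          label: data
+          size_mib: 4096
+kernel_arguments:
+  should_exist:
+    - flatcar.autologin
+```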
+
+[part-types]: http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
+[rfc2397]: https://tools.ietf.org/html/rfc2397
+[systemd-escape]: https://www.freedesktop.org/software/systemd/man/systemd-escape.html
diff --git a/content/docs/latest/provisioning/config-transpiler/examples.md b/content/docs/latest/provisioning/config-transpiler/examples.md
new file mode 100644
index 00000000..cbc70692
--- /dev/null
+++ b/content/docs/latest/provisioning/config-transpiler/examples.md
@@ -0,0 +1,231 @@
+---
+title: Butane Config Examples
+linktitle: Examples
+weight: 20
+---
+
+Here you can find a number of simple examples of using Butane Configs, with some explanations about what they do. The examples here are in no way comprehensive; for a full list of all the available fields, check out the [Butane specification][spec].
+
+## Users and groups
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ password_hash: "$6$43y3tkl..."
+ ssh_authorized_keys:
+ - ssh-rsa ABCLKJASD...
+```
+
+This example modifies the existing `core` user, giving it a known password hash (this enables login via password) and setting its SSH key.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: user1
+ password_hash: "$6$43y3tkl..."
+ ssh_authorized_keys:
+ - key1
+ - key2
+ - name: user2
+ ssh_authorized_keys:
+ - key3
+```
+
+This example will create two users, `user1` and `user2`. The first user has a password set and two SSH public keys authorized to log in as that user. The second user doesn't have a password set (so login via password is disabled), but has one SSH key.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: user1
+ password_hash: "$6$43y3tkl..."
+ ssh_authorized_keys:
+ - key1
+ home_dir: /home/user1
+ no_create_home: true
+ groups:
+ - wheel
+ - plugdev
+ shell: /bin/bash
+```
+
+This example creates one user, `user1`, with the password hash `$6$43y3tkl...` and one authorized SSH public key. The user is given the home directory `/home/user1` (which is not created automatically, since `no_create_home` is true), is added to the `wheel` and `plugdev` groups, and gets `/bin/bash` as login shell.
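+
+Groups can also be created explicitly and referenced from a user entry. A minimal sketch (the group name and GID below are arbitrary examples):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+  groups:
+    - name: developers
+      gid: 1500
+  users:
+    - name: user1
+      primary_group: developers
+```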
+
+### Generating a password hash
+
+If you choose to use a password instead of an SSH key, generating a safe hash is extremely important to the security of your system. Simplified hashes like md5crypt are trivial to crack on modern GPU hardware. Here are a few ways to generate secure hashes:
+
+```
+# On Debian/Ubuntu (via the package "whois")
+mkpasswd --method=SHA-512 --rounds=4096
+
+# OpenSSL (note: this generates md5crypt; while better than plaintext, it should not be considered fully secure)
+openssl passwd -1
+
+# Python
+python -c "import crypt,random,string; print(crypt.crypt(input('clear-text password: '), '\$6\$' + ''.join([random.choice(string.ascii_letters + string.digits) for _ in range(16)])))"
+
+# Perl (change password and salt values)
+perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'
+```
+
+Using a higher number of rounds will help create more secure passwords, but given enough time, password hashes can be cracked by brute force. On most RPM-based distributions there is a tool called mkpasswd available in the `expect` package, but this does not handle "rounds" nor advanced hashing algorithms.
+
+## Storage and files
+
+### Files
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /opt/file
+ contents:
+ inline: Hello, world!
+ mode: 0644
+ user:
+ id: 500
+ group:
+ id: 501
+```
+
+This example creates a file at `/opt/file` with the contents `Hello, world!` and permissions 0644 (readable and writable by the owner, read-only for everyone else); the file is owned by the user with uid 500 and the group with gid 501.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /opt/file2
+ contents:
+ source: http://example.com/file2
+ compression: gzip
+ verification:
+ hash: sha512-4ee6a9d20cc0e6c7ee187daffa6822bdef7f4cebe109eff44b235f97e45dc3d7a5bb932efc841192e46618f48a6f4f5bc0d15fd74b1038abf46bf4b4fd409f2e
+ mode: 0644
+```
+
+This example fetches a gzip-compressed file from `http://example.com/file2`, makes sure that it matches the provided sha512 hash, and writes it decompressed to `/opt/file2`.
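+
+Contents can also be appended to an existing file instead of replacing it. The following sketch adds an entry to `/etc/hosts` (the address and hostname are arbitrary examples):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/hosts
+      append:
+        - inline: |
+            192.168.0.10 internal-registry
+```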
+
+### Filesystems
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ filesystems:
+ - device: /dev/disk/by-partlabel/ROOT
+ format: btrfs
+ wipe_filesystem: true
+ label: ROOT
+```
+
+This example formats the root filesystem to be `btrfs`.
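+
+To create and mount an additional filesystem, `with_mount_unit` can generate the corresponding systemd mount unit automatically. A sketch assuming a second disk is present (the device path is an illustrative example):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  filesystems:
+    - device: /dev/disk/by-id/virtio-extra-disk
+      format: ext4
+      label: DATA
+      path: /var/lib/data
+      with_mount_unit: true
+```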
+
+## systemd units
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: etcd-member.service
+ dropins:
+ - name: conf1.conf
+ contents: |
+ [Service]
+ Environment="ETCD_NAME=infra0"
+```
+
+This example adds a drop-in for the `etcd-member` unit, setting the name for etcd to `infra0` with an environment variable. More information on systemd dropins can be found in [the docs][dropins].
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: hello.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=A hello world unit!
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/echo "Hello, World!"
+
+ [Install]
+ WantedBy=multi-user.target
+```
+
+This example creates a new systemd unit called `hello.service`, enables it so it will run on boot, and defines its contents to simply echo `"Hello, World!"`.
+
+## systemd user units
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: flatcar
+ groups:
+ - systemd-journal
+storage:
+ directories:
+ - path: /etc/systemd/user/default.target.wants
+ mode: 0755
+ files:
+ - path: /etc/systemd/user/hello.service
+ mode: 0644
+ contents:
+ inline: |
+ [Unit]
+ Description=A hello world unit!
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/echo "Hello, World!"
+
+ [Install]
+ WantedBy=default.target
+ links:
+ - path: /etc/systemd/user/default.target.wants/hello.service
+ target: /etc/systemd/user/hello.service
+ hard: false
+```
+
+This example creates a new systemd user unit called `hello.service`, enables it with an explicit symlink (workaround for Ignition) so it will run on boot, and defines the contents to simply echo `"Hello, World!"`.
+
+_Note_: Adding a regular user like "flatcar" to the `systemd-journal` group can be useful if you want to access the journal logs with `journalctl --user --unit hello.service`. You can already access logs with `journalctl --user-unit hello.service` from the default `core` user.
+
+## networkd units
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/systemd/network/static.network
+ contents:
+ inline: |
+ [Match]
+ Name=enp2s0
+
+ [Network]
+ Address=192.168.0.15/24
+ Gateway=192.168.0.1
+```
+
+This example creates a networkd unit to set the IP address on the `enp2s0` interface to the static address `192.168.0.15/24`, and sets an appropriate gateway. More information on networkd units in Flatcar Container Linux can be found in [the docs][networkd].
+
+
+[spec]: ./configuration
+[dropins]: ../../setup/systemd/drop-in-units
+[networkd]: ../../setup/customization/network-config-with-networkd
diff --git a/content/docs/latest/provisioning/config-transpiler/getting-started.md b/content/docs/latest/provisioning/config-transpiler/getting-started.md
new file mode 100644
index 00000000..ae93233b
--- /dev/null
+++ b/content/docs/latest/provisioning/config-transpiler/getting-started.md
@@ -0,0 +1,60 @@
+---
+title: Getting Started
+weight: 10
+aliases:
+ - ../../container-linux-config-transpiler/doc/getting-started
+ - ../../container-linux-config-transpiler/getting-started
+---
+
+
+`butane` is a tool that will consume a Butane configuration and produce an Ignition configuration file that can be given to a Container Linux machine when it first boots to set the machine up. Using this config, a machine can be told to create users, format the root filesystem, set up the network, install systemd units, and more.
+
+Butane configurations are YAML files conforming to Butane's schema. For more information on the schema, take a look at [configuration][1].
+
+`butane` can be downloaded from its [GitHub Releases page][4] or used via Docker (`docker run --rm -i quay.io/coreos/butane:latest < butane_config.yaml > ignition_config.json`).
+
+As a simple example, let's use `butane` to set the authorized ssh key for the core user on a Container Linux machine.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - ssh-rsa AAAAB3NzaC1yc...
+```
+
+In the above file, you'll want to set the `ssh-rsa AAAAB3NzaC1yc...` line to be your ssh public key (which is probably the contents of `~/.ssh/id_rsa.pub`, if you're on Linux).
+
+If we take this file and give it to `butane`:
+
+```
+$ docker run --rm -i quay.io/coreos/butane:latest < butane_config.yaml
+{"ignition":{"version":"3.3.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa AAAAB3NzaC1yc..."]}]}}
+```
+
+We can see that it produces a JSON file. This file isn't intended to be human-friendly, and will definitely be a pain to read/edit (especially if you have multi-line things like systemd units). Luckily, you shouldn't have to care about this file! Just provide it to a booting Container Linux machine and Ignition, the utility inside of Container Linux that receives this file, will know what to do with it.
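+If you do want to inspect the generated JSON, any JSON pretty-printer helps; for example with Python's standard `json.tool` module (the inline config here is just a sample):
+
+```sh
+# Pretty-print a compact Ignition config for human inspection
+echo '{"ignition":{"version":"3.3.0"},"passwd":{"users":[{"name":"core"}]}}' \
+  | python3 -m json.tool
+```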
+
+The method by which this file is provided to a Container Linux machine depends on the environment in which the machine is running. For instructions on a given provider, head over to the [list of supported platforms for Ignition][2].
+
+To see some examples for what else Butane can do, head over to the [examples][3].
+
+## Migration from Container Linux Config
+
+While the two formats are quite similar, some changes are needed to migrate a Container Linux Config to Butane.
+
+- The `variant` and `version` keys are required.
+- The Butane transpiler has no platform feature for templating with dynamic data. The resulting feature is still available by explicitly loading the metadata variables to reference [dynamic data][dynamic].
+- The high-level sections for `etcd`, `flannel`, `docker`, `update`, and `locksmith` are gone; the resulting units or files need to be written explicitly instead. For etcd, see the [cluster docs][cluster]. Both the `update` and `locksmith` settings go to `/etc/flatcar/update.conf`.
+- The `networkd` section is gone; the files need to be written directly to the `/etc/systemd/network/` directory.
+- The `overwrite` field for files is not set to `true` by default anymore, so it needs to be explicitly set to `true` for the old behavior.
+- File entries can't specify filesystems anymore, as was done with `filesystem: root` or `filesystem: oem`. Instead, they use the full path; which filesystem that resolves to depends on whether and how the initrd mount path is set for each specified filesystem.
+- Units only have the `enabled` field, support for `enable` got removed.
+
+[1]: ../config-transpiler/configuration
+[2]: https://coreos.github.io/ignition/supported-platforms/
+[3]: https://coreos.github.io/butane/examples/
+[4]: https://github.com/coreos/butane/releases
+[dynamic]: ../ignition/dynamic-data/
+[cluster]: ../../setup/clusters/
diff --git a/content/docs/latest/provisioning/ignition/_index.md b/content/docs/latest/provisioning/ignition/_index.md
new file mode 100644
index 00000000..b5a853c4
--- /dev/null
+++ b/content/docs/latest/provisioning/ignition/_index.md
@@ -0,0 +1,57 @@
+---
+content_type: ignition
+title: Ignition
+linktitle: Ignition
+description: Provisioning utility specially designed for Container OSs
+main_menu: true
+weight: 10
+aliases:
+ - ../ignition/what-is-ignition
+ - ../ignition
+---
+
+Ignition is a new provisioning utility designed specifically for container OSs like Flatcar Container Linux, which allows you to manipulate disks during early boot. This includes partitioning disks, formatting partitions, writing files (regular files, systemd units, networkd units, and more), and configuring users. On the first boot, Ignition reads its configuration from a source-of-truth (remote URL, network metadata service, or hypervisor bridge, for example) and applies the configuration.
+
+A [series of example configs][examples] are provided for reference. The specification can be found [here][ignition-specification].
+
+## Ignition vs coreos-cloudinit
+
+Ignition solves many of the same problems as [coreos-cloudinit][cloudinit] but in a simpler, more predictable, and more flexible manner. This is achieved with two major changes: Ignition only runs once and it does not handle variable substitution. Ignition has also fixed a number of pain points with regard to configuration.
+
+Instead of YAML, Ignition uses JSON for its configuration format. JSON's typing immediately eliminates problems like "off" being rewritten as "false", the "#cloud-config" header being stripped because comments *shouldn't* have meaning, and confusion around whether those file permissions were written in octal or decimal. Ignition's configuration is also versioned, which allows future development without being constrained by backward compatibility.
+
+### Ignition only runs once
+
+Even though Ignition only runs once, during the first boot of the system, it packs a powerful punch. Because Ignition runs so early in the boot process (in the initramfs, to be exact), it is able to repartition disks, format filesystems, create users, and write files, all before the userspace begins to boot.
+
+Because Ignition runs so early in the boot process, the network config is available for networkd to read when it first starts, and systemd services are already written to disk when systemd starts. [Configuring the network][network-config] is no longer an issue. This results in a simple startup, a faster startup, and the ability to accurately inspect the unit dependency graphs.
+
+### No variable substitution
+
+Because Ignition only runs once, there's no reason for it to incorporate dynamic data (like floating IP addresses, or compute regions).
+
+Instead, use Ignition to write static files and leverage systemd's environment variable expansion to insert dynamic data. The Ignition config should install a service which fetches the necessary runtime data; then any services which need this data (such as etcd or fleet) can depend on the installed service and source its output. The result is that the data is only collected if and when it is needed. For supported platforms, Flatcar Container Linux provides a small utility (`coreos-metadata.service`) to help fetch this data.
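+As a sketch of this pattern, a unit consuming the variables written by `coreos-metadata.service` might look like the following (the variable name is illustrative and depends on the platform):
+
+```ini
+[Unit]
+Requires=coreos-metadata.service
+After=coreos-metadata.service
+
+[Service]
+# Variables fetched at runtime become part of the service environment
+EnvironmentFile=/run/metadata/flatcar
+ExecStart=/usr/bin/echo "provisioned with IP ${COREOS_EC2_IPV4_LOCAL}"
+```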
+
+### When is Ignition executed
+
+On boot, GRUB checks the EFI System Partition for a file at `flatcar/first_boot` (or `coreos/first_boot` if the machine was updated from CoreOS CL) and sets `flatcar.first_boot=detected` if found. The `flatcar.first_boot` parameter is processed by a [systemd-generator] in the [initramfs] and if the parameter value is non-zero, the Ignition units are set as dependencies of `initrd.target`, causing Ignition to run. If the parameter is set to the special value `detected`, the `flatcar/first_boot` (or `coreos/first_boot`) file is deleted after Ignition runs successfully. You can schedule a re-run of Ignition with the `flatcar-reset` tool (available since Alpha 3535.0.0), which also takes care of cleaning up old rootfs state and keeping only the data from the rootfs you want to keep.
+
+Note that [PXE][supported-platforms] deployments don't use GRUB to boot, so `flatcar.first_boot=1` must be added to the boot arguments in order for Ignition to run. `detected` should not be specified so Ignition will not attempt to delete `flatcar/first_boot` (or `coreos/first_boot`).
+
+## Providing Ignition a config
+
+Ignition can read its config from a number of different locations, but only from one at a time. When running Flatcar Container Linux on the supported cloud providers, Ignition will read its config from the instance's userdata. This means that if Ignition is being used, it will not be possible to use other tools which also use this userdata (such as coreos-cloudinit). Bare metal installations and PXE boots can use the kernel boot parameters to point Ignition at the config.
+
+## Where is Ignition supported?
+
+In addition to providers supported by [upstream Ignition][ignition-supported], Flatcar [supports](https://github.com/flatcar/scripts/blob/main/sdk_container/src/third_party/coreos-overlay/sys-apps/ignition/files/0018-revert-internal-oem-drop-noop-OEMs.patch) cloudsigma, hyperv, interoute, niftycloud, rackspace[-onmetal], and vagrant.
+
+Ignition is under active development. Expect to see support for more images in the coming months.
+
+[examples]: https://github.com/coreos/ignition/blob/main/docs/examples.md
+[ignition-specification]: specification
+[cloudinit]: https://github.com/flatcar/coreos-cloudinit
+[network-config]: network-configuration
+[supported-platforms]: https://github.com/coreos/ignition/blob/main/docs/supported-platforms.md
+[systemd-generator]: http://www.freedesktop.org/software/systemd/man/systemd.generator.html
+[initramfs]: https://www.kernel.org/doc/Documentation/filesystems/ramfs-rootfs-initramfs.txt
diff --git a/content/docs/latest/provisioning/ignition/boot-process.md b/content/docs/latest/provisioning/ignition/boot-process.md
new file mode 100644
index 00000000..5c3035ce
--- /dev/null
+++ b/content/docs/latest/provisioning/ignition/boot-process.md
@@ -0,0 +1,119 @@
+---
+title: Flatcar Container Linux startup process
+linktitle: Boot process overview
+weight: 10
+aliases:
+ - ../../ignition/boot-process
+---
+
+The Flatcar Container Linux startup process is built on the standard [Linux startup process][linux-startup]. Since this process is already well documented and generally well understood, this document will focus on aspects specific to booting Flatcar Container Linux.
+
+## Bootloader
+
+[GRUB][grub] is the first program executed when a Flatcar Container Linux system boots. The Flatcar Container Linux [GRUB config][grub-config] has several roles.
+
+First, the GRUB config [specifies which `usr` partition to use][gptprio.next] from the two `usr` partitions Flatcar Container Linux uses to provide atomic upgrades and rollbacks.
+
+Second, GRUB [checks for a file called `flatcar/first_boot` in the EFI System Partition][check-file] to determine if this is the first time a machine has booted (or it checks for `coreos/first_boot` if the machine was updated from CoreOS CL). If that file is found, GRUB sets the `flatcar.first_boot=detected` Linux kernel command line parameter. This parameter is used in later stages of the boot process.
+
+Finally, GRUB [searches for the initial disk GUID][search-guid] (00000000-0000-0000-0000-000000000001) built into Flatcar Container Linux images. This GUID is randomized later in the boot process so that individual disks may be uniquely identified. If GRUB finds this GUID it sets another Linux kernel command line parameter, `flatcar.randomize_guid=00000000-0000-0000-0000-000000000001`.
+
+## Early user space
+
+After GRUB, the Flatcar Container Linux startup process moves into the initial RAM file system. The initramfs mounts the root filesystem, randomizes the disk GUID, and runs Ignition.
+
+If the `flatcar.randomize_guid` kernel parameter is provided, the disk with the specified GUID is given a new, random GUID.
+
+If the `flatcar.first_boot` kernel parameter is provided and non-zero, Ignition and networkd are started. networkd will use DHCP to set up temporary IP addresses and routes so that Ignition can fetch its configuration from the network.
+
+### Ignition
+
+When Ignition runs on Flatcar Container Linux, it reads the Linux command line, looking for `flatcar.oem.id`. Ignition uses this identifier to determine where to read the user-provided configuration and which provider-specific configuration to combine with the user's. This provider-specific configuration performs basic machine setup, and may include enabling `coreos-metadata-sshkeys@.service` (covered in more detail below).
+
+After Ignition runs successfully, if `flatcar.first_boot` was set to the special value `detected`, Ignition mounts the EFI System Partition and deletes the `flatcar/first_boot` file (or `coreos/first_boot` if the machine was updated from CoreOS CL).
+
+## User space
+
+After all of the tasks in the initramfs complete, the machine pivots into user space. It is at this point that systemd begins starting units, including, if it was enabled, `coreos-metadata-sshkeys@core.service`.
+
+### SSH keys
+
+`coreos-metadata-sshkeys@core.service` is responsible for fetching SSH keys from the machine's environment. The keys are written to `~core/.ssh/authorized_keys.d/coreos-metadata` and `update-ssh-keys` is run to update `~core/.ssh/authorized_keys`. On cloud platforms, the keys are read from the provider's metadata service. This service is not supported on all platforms and is enabled by Ignition *only* on those which are supported.
+
+### Reprovisioning
+
+To trigger a new Ignition run, you should use the `flatcar-reset` tool (available from Alpha 3535.0.0) for a (selective) cleanup of the root filesystem during the next boot. It takes care of cleaning up old state (e.g., files from the old configuration or any side effects such as state files) while keeping only the data you want to keep through the `--keep-paths` argument. The paths to keep can be specified as regular expressions. The machine ID can be kept through the `--keep-machine-id` argument (turning it into a kernel cmdline parameter because `/etc/machine-id` can't be preserved directly for systemd first boot semantics). It is also possible to specify that a local or a particular remote Ignition configuration should be used.
+
+When specifying paths to keep, include only the paths you actually need, and not those set up by the old Ignition config or its side effects, so that the old configuration state is really discarded. When a specified path is a directory, its contents are preserved as well because `MYPATH/.*` is automatically appended as an additional regular expression for paths to keep.
+To delete the contents of a directory but keep the directory itself, specify it as an equivalent regular expression in the form of `'^/etc/mypath'`, `'/etc/mypath$'`, `'/etc/mypat[h]'`, `'/etc/(mypath)'`, or `'(/etc/mypath)'`.
+
+The used regular expression language is that of `egrep`. Assuming you specified `/etc/mypath`, you can test which paths will be deleted with the following command (note the `-not`):
+
+```sh
+find / /etc -xdev -regextype egrep -not -regex '(/etc/mypath|/etc/mypath/.*)'
+```
+
+You can test which path will be kept with the following command (note the absence of `-not`):
+
+```sh
+find / /etc -xdev -regextype egrep -regex '(/etc/mypath|/etc/mypath/.*)'
+```
+
+Both `/` and `/etc` need to be specified because `/etc` is an overlay mount.
+
+Meaningful examples are:
+
+- `'/etc/ssh/ssh_host_.*'` to preserve SSH host keys
+- `/var/log` to preserve system logs
+- `/var/lib/docker` and `/var/lib/containerd` to preserve container state and images
+
+An example of selectively resetting the OS and retriggering Ignition while keeping SSH host keys, logs, and the machine ID:
+
+```sh
+sudo flatcar-reset --keep-machine-id --keep-paths '/etc/ssh/ssh_host_.*' /var/log
+sudo systemctl reboot
+```
+
+#### Technical Details for Manual Ignition Re-runs
+
+It is possible, though not recommended, to manually set `flatcar.first_boot=1` as a temporary kernel command line parameter in GRUB, or to create the flag file with `touch /boot/flatcar/first_boot` (or `/boot/coreos/first_boot` if the machine was updated from CoreOS CL).
+Be aware that if you changed the Ignition config in the meantime, old files not known to the new Ignition config will be kept, as will any other runtime data.
+Systemd service presets are also not reevaluated automatically. This means that newly declared service units won't be enabled unless you also invalidate the machine ID or create the symlinks for the service targets.
+
+To ensure that the systemd service presets are reevaluated, you should invalidate the machine ID by executing `sudo rm /etc/machine-id` before the reboot. This will give the node a new machine ID unless you have added the current machine ID as a kernel argument in `/usr/share/oem/grub.cfg` (append the line `set linux_append="$linux_append systemd.machine_id=..."` to the end of the file, with the current machine ID instead of `...`).
+
+If you can't do this, you have to create the symlinks for the service target through Ignition `links` entries.
+Here is an example config with an additional `links` entry that ensures that the new service unit is enabled if this config is used for reprovisioning:
+
+```json
+{
+ "ignition": {
+ "version": "2.2.0"
+ },
+ "systemd": {
+ "units": [
+ {
+ "name": "my.service",
+ "enabled": true,
+ "contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
+ }
+ ]
+ },
+ "storage": {
+ "links": [
+ {
+ "filesystem": "root",
+ "path": "/etc/systemd/system/multi-user.target.wants/my.service",
+ "target": "/etc/systemd/system/my.service"
+ }
+ ]
+ }
+}
+```
+
+[check-file]: https://github.com/flatcar/scripts/blob/80e49d190ff99e8c489bbf420dc2bc248ae553e3/build_library/grub.cfg#L68-L74
+[gptprio.next]: https://github.com/flatcar/scripts/blob/80e49d190ff99e8c489bbf420dc2bc248ae553e3/build_library/grub.cfg#L128
+[grub]: https://www.gnu.org/software/grub/
+[grub-config]: https://github.com/flatcar/scripts/blob/80e49d190ff99e8c489bbf420dc2bc248ae553e3/build_library/grub.cfg
+[linux-startup]: https://en.wikipedia.org/wiki/Linux_startup_process
+[search-guid]: https://github.com/flatcar/scripts/blob/9e1c23f3f44d2751076e770f43f7a6db05d49652/build_library/grub.cfg#L73-L78
diff --git a/content/docs/latest/provisioning/ignition/dynamic-data.md b/content/docs/latest/provisioning/ignition/dynamic-data.md
new file mode 100644
index 00000000..c358a2bd
--- /dev/null
+++ b/content/docs/latest/provisioning/ignition/dynamic-data.md
@@ -0,0 +1,81 @@
+---
+title: Referencing dynamic data
+weight: 40
+aliases:
+ - ./metadata
+ - ../../ignition/metadata
+---
+
+## Overview
+
+Sometimes it can be useful to refer to data in an Ignition config that isn't known until a machine boots, like its network address. This can be accomplished with [afterburn][afterburn] (previously called `coreos-metadata`). Afterburn is a very basic utility that fetches information about the current machine and makes it available for consumption. By making it a dependency of services which require this information, systemd will ensure that coreos-metadata has successfully completed before starting these services. These services can then simply source the fetched information and let systemd perform the environment variable expansions.
+
+Although `coreos-metadata.service` runs Afterburn, it will not set the hostname. The hostname is set either through an OEM agent or, on particular platforms, through Afterburn in the initramfs. If Afterburn supports your platform but is not invoked in the initramfs by default, you can run it later to set the hostname (`--hostname=/etc/hostname`).
+
+## Supported data by provider
+
+The information available for each provider can be found in the [afterburn docs][afterburndocs]; however, the variable names differ in their prefix: in Flatcar Container Linux (carried over from CoreOS Container Linux), they are called `COREOS_*` instead of `AFTERBURN_*`. Also, `*_AWS_*` is `*_EC2_*` and `*_GCP_*` is `*_GCE_*`.
+These variables are written to `/run/metadata/flatcar` as an environment file, from where you can either source them or set them up as the environment for a systemd unit (note that your service should be started `After=coreos-metadata.service`).
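+You can also source the environment file directly in a shell. A minimal sketch, using a stand-in file with hypothetical values (on a real machine the file is `/run/metadata/flatcar`, written by `coreos-metadata.service`):
+
+```sh
+# Stand-in for /run/metadata/flatcar with hypothetical values
+mkdir -p /tmp/metadata
+cat > /tmp/metadata/flatcar <<'EOF'
+COREOS_EC2_IPV4_LOCAL=10.0.0.5
+COREOS_EC2_HOSTNAME=ip-10-0-0-5
+EOF
+
+# Source it, exporting the variables the way EnvironmentFile= would provide them
+set -a
+. /tmp/metadata/flatcar
+set +a
+echo "local IPv4: ${COREOS_EC2_IPV4_LOCAL}"
+```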
+
+## Custom metadata providers
+
+To use the `custom` platform, create a coreos-metadata service unit to execute your own custom metadata fetcher. The custom metadata fetcher must write an environment file `/run/metadata/flatcar` defining a `COREOS_CUSTOM_*` environment variable for every piece of dynamic data you want to use.
+
+### Example
+
+Assume `https://example.com/metadata-script.sh` is a script which communicates with a metadata service and then writes the following file to `/run/metadata/flatcar`:
+
+```
+COREOS_CUSTOM_HOSTNAME=foobar
+COREOS_CUSTOM_PRIVATE_IPV4=
+COREOS_CUSTOM_PUBLIC_IPV4=
+```
+
+The following Butane config downloads the metadata fetching script, sets up a `coreos-metadata` service to use the script, and configures a test service using the metadata provided.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: "/opt/get-metadata.sh"
+ contents:
+ source: "https://example.com/metadata-script.sh"
+
+systemd:
+ units:
+ - name: "coreos-metadata.service"
+ contents: |
+ [Unit]
+ Description=Metadata agent
+ After=nss-lookup.target
+ After=network-online.target
+ Wants=network-online.target
+ [Service]
+ Type=oneshot
+ Restart=on-failure
+ RemainAfterExit=yes
+ ExecStart=/opt/get-metadata.sh
+ - name: "test.service"
+ enabled: true
+ contents: |
+ [Unit]
+ After=coreos-metadata.service
+ Requires=coreos-metadata.service
+ [Service]
+ EnvironmentFile=/run/metadata/flatcar
+ Type=oneshot
+ RemainAfterExit=yes
+ Restart=on-failure
+ # Print the custom hostname variable from /run/metadata/flatcar
+ ExecStart=echo "${COREOS_CUSTOM_HOSTNAME}"
+ # Directly use /run/metadata/flatcar to print the private IP address out (with multiple patterns to work with any provider not only custom)
+        ExecStart=/usr/bin/bash -c 'grep -v -E "(IPV6|GATEWAY)" /run/metadata/flatcar | grep IP | grep -E "(PRIVATE|LOCAL|DYNAMIC)" | cut -d = -f 2'
+ [Install]
+ WantedBy=multi-user.target
+```
+
+You can find another example in the [VMware docs](../../installing/cloud/vmware.md).
+
+
+[afterburn]: https://github.com/coreos/afterburn
+[afterburndocs]: https://github.com/coreos/afterburn/blob/main/docs/usage/attributes.md#metadata-attributes
diff --git a/content/docs/latest/provisioning/ignition/network-configuration.md b/content/docs/latest/provisioning/ignition/network-configuration.md
new file mode 100644
index 00000000..b2dc209f
--- /dev/null
+++ b/content/docs/latest/provisioning/ignition/network-configuration.md
@@ -0,0 +1,83 @@
+---
+title: Network configuration
+weight: 20
+aliases:
+ - ../../ignition/network-configuration
+---
+
+Configuring networkd with Ignition is a very straightforward task. Because Ignition runs before networkd starts, configuration is just a matter of writing the desired config to disk. The Ignition config has a specific section dedicated to this.
+
+Each of these examples is written in version 2.0.0 of the config. Ensure that any configuration matches the version that Ignition expects.
+
+## Static networking
+
+In this example, the network interface with the name "eth0" will be given the IP address 10.0.1.7. A typical interface will need more configuration and may use all of the options of a [network unit][network].
+
+```json
+{
+ "ignition": { "version": "2.0.0" },
+ "networkd": {
+ "units": [{
+ "name": "00-eth0.network",
+ "contents": "[Match]\nName=eth0\n\n[Network]\nAddress=10.0.1.7"
+ }]
+ }
+}
+```
+
+This configuration will instruct Ignition to create a single network unit named "00-eth0.network" with the contents:
+
+```ini
+[Match]
+Name=eth0
+
+[Network]
+Address=10.0.1.7
+```
+
+When the system boots, networkd will read this config and assign the IP address to eth0.
+
+### Using static IP addresses with Ignition
+
+Because Ignition writes network configuration to disk for networkd to use later, statically-configured interfaces will be brought online only after Ignition has run. If static IP configuration is required to download remote configs before Ignition has run, use one of the following two forms of supported kernel command-line arguments.
+
+#### Single-Interface Format
+
+This format can configure a static IP address on the named interface, or on all interfaces when unspecified.
+
+* `ip=` to specify the IP address, for example `ip=10.0.2.42`
+* `netmask=` to specify the netmask, for example `netmask=255.255.255.0`
+* `gateway=` to specify the gateway address, for example `gateway=10.0.2.2`
+* `ksdevice=` (optional) to limit configuration to the named interface, for example `ksdevice=eth0`
+
+#### Multi-Interface Format
+
+This format can be specified multiple times to apply unique static configuration to different interfaces. Omitting the `<interface>` parameter will apply the configuration to all interfaces that have not yet been configured.
+
+* `ip=<client-ip>::<gw-ip>:<netmask>:<hostname>:<interface>:none[:[<dns1>][:<dns2>]]`, for example `ip=10.0.2.42::10.0.2.2:255.255.255.0::eth0:none:8.8.8.8:8.8.4.4`
+
+## Bonded NICs
+
+In this example, all of the network interfaces whose names begin with "eth" will be bonded together to form "bond0". This new interface will then be configured to use DHCP.
+
+```json
+{
+ "ignition": { "version": "2.0.0" },
+ "networkd": {
+ "units": [
+ {
+ "name": "00-eth.network",
+ "contents": "[Match]\nName=eth*\n\n[Network]\nBond=bond0"
+ },
+ {
+ "name": "10-bond0.netdev",
+ "contents": "[NetDev]\nName=bond0\nKind=bond"
+ },
+ {
+ "name": "20-bond0.network",
+ "contents": "[Match]\nName=bond0\n\n[Network]\nDHCP=true"
+ }
+ ]
+ }
+}
+```
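+As with the static example above, Ignition simply writes these units to disk; unpacked, the three files read as follows:
+
+```ini
+# 00-eth.network
+[Match]
+Name=eth*
+
+[Network]
+Bond=bond0
+
+# 10-bond0.netdev
+[NetDev]
+Name=bond0
+Kind=bond
+
+# 20-bond0.network
+[Match]
+Name=bond0
+
+[Network]
+DHCP=true
+```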
+
+[network]: http://www.freedesktop.org/software/systemd/man/systemd.network.html
diff --git a/content/docs/latest/provisioning/ignition/specification.md b/content/docs/latest/provisioning/ignition/specification.md
new file mode 100644
index 00000000..e75a3e76
--- /dev/null
+++ b/content/docs/latest/provisioning/ignition/specification.md
@@ -0,0 +1,359 @@
+---
+title: Ignition Specification
+linktitle: Specification
+weight: 30
+---
+
+Ignition uses a JSON format that is specified in several major versions: v1, v2, and v3, each with minor revisions (such as `2.3.0` or `3.3.0`). While v1 and v2 are still supported in Flatcar Container Linux, from version 3185.0.0 it's recommended to write new configurations with v3.
+
+
+## Ignition v3
+
+Starting from release 3185.0.0, Ignition v3 (specification 3.3.0) is supported in addition to Ignition v2. There are some things to be aware of:
+* v1 and v2 are still supported and get translated at runtime; while this is well tested, there may be corner cases where a v2 config relied on unspecified behavior
+* `clevis` is not supported
+* `kernelArguments` are supported and will persist the changes in `/usr/share/oem/grub.cfg` before the reboot, but this only works for unconditional `set linux_append` statements in grub.cfg, and `linux_console` is not considered
+* The high-level [Butane YAML format][butane-spec] can be used to generate Ignition v3 configs:
+
+```bash
+cat > config.yml <<EOF
+variant: flatcar
+version: 1.0.0
+EOF
+docker run --rm -i quay.io/coreos/butane:latest < config.yml > ignition.json
+```
+
+* **ignition** (object): metadata about the configuration itself.
+ * **version** (string): the semantic version number of the spec. The spec version must be compatible with the latest version (`3.3.0`). Compatibility requires the major versions to match and the spec version be less than or equal to the latest version. `-experimental` versions compare less than the final version with the same number, and previous experimental versions are not accepted.
+ * **_config_** (objects): options related to the configuration.
+ * **_merge_** (list of objects): a list of the configs to be merged to the current config.
+ * **source** (string): the URL of the config. Supported schemes are `http`, `https`, `s3`, `gs`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_compression_** (string): the type of compression used on the config (null or gzip). Compression cannot be used with S3.
+ * **_httpHeaders_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the config.
+        * **_hash_** (string): the hash of the config, in the form `<type>-<value>` where type is either `sha512` or `sha256`.
+ * **_replace_** (object): the config that will replace the current.
+ * **source** (string): the URL of the config. Supported schemes are `http`, `https`, `s3`, `gs`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_compression_** (string): the type of compression used on the config (null or gzip). Compression cannot be used with S3.
+ * **_httpHeaders_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the config.
+        * **_hash_** (string): the hash of the config, in the form `<type>-<value>` where type is either `sha512` or `sha256`.
+ * **_timeouts_** (object): options relating to `http` timeouts when fetching files over `http` or `https`.
+    * **_httpResponseHeaders_** (integer): the time to wait (in seconds) for the server's response headers (but not the body) after making a request. 0 indicates no timeout. Default is 10 seconds.
+    * **_httpTotal_** (integer): the time limit (in seconds) for the operation (connection, request, and response), including retries. 0 indicates no timeout. Default is 0.
+ * **_security_** (object): options relating to network security.
+ * **_tls_** (object): options relating to TLS when fetching resources over `https`.
+ * **_certificateAuthorities_** (list of objects): the list of additional certificate authorities (in addition to the system authorities) to be used for TLS verification when fetching over `https`. All certificate authorities must have a unique `source`.
+ * **source** (string): the URL of the certificate bundle (in PEM format). The bundle can contain multiple concatenated certificates. Supported schemes are `http`, `https`, `s3`, `gs`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_compression_** (string): the type of compression used on the certificate (null or gzip). Compression cannot be used with S3.
+ * **_httpHeaders_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the certificate.
+          * **_hash_** (string): the hash of the certificate, in the form `<type>-<value>` where type is either `sha512` or `sha256`.
+ * **_proxy_** (object): options relating to setting an `HTTP(S)` proxy when fetching resources.
+ * **_httpProxy_** (string): will be used as the proxy URL for HTTP requests and HTTPS requests unless overridden by `httpsProxy` or `noProxy`.
+ * **_httpsProxy_** (string): will be used as the proxy URL for HTTPS requests unless overridden by `noProxy`.
+    * **_noProxy_** (list of strings): a list of hosts that should be excluded from proxying. Each value is an IP address prefix (`1.2.3.4`), an IP address prefix in CIDR notation (`1.2.3.4/8`), a domain name, or the special DNS label `*`. An IP address prefix and a domain name can also include a literal port number (`1.2.3.4:80`). A domain name matches that name and all subdomains. A domain name with a leading `.` matches subdomains only. For example, `foo.com` matches `foo.com` and `bar.foo.com`; `.y.com` matches `x.y.com` but not `y.com`. A single asterisk (`*`) indicates that no proxying should be done.
+* **_storage_** (object): describes the desired state of the system's storage devices.
+ * **_disks_** (list of objects): the list of disks to be configured and their options. Every entry must have a unique `device`.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **_wipeTable_** (boolean): whether or not the partition tables shall be wiped. When true, the partition tables are erased before any further manipulation. Otherwise, the existing entries are left intact.
+ * **_partitions_** (list of objects): the list of partitions and their configuration for this particular disk. Every partition must have a unique `number`, or if 0 is specified, a unique `label`.
+ * **_label_** (string): the PARTLABEL for the partition.
+ * **_number_** (integer): the partition number, which dictates its position in the partition table (one-indexed). If zero, use the next available partition slot.
+ * **_sizeMiB_** (integer): the size of the partition (in mebibytes). If zero, the partition will be made as large as possible.
+ * **_startMiB_** (integer): the start of the partition (in mebibytes). If zero, the partition will be positioned at the start of the largest block available.
+ * **_typeGuid_** (string): the GPT [partition type GUID][part-types]. If omitted, the default will be 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem data).
+ * **_guid_** (string): the GPT unique partition GUID.
+      * **_wipePartitionEntry_** (boolean): if true, Ignition will clobber an existing partition if it does not match the config. If false (default), Ignition will fail instead.
+      * **_shouldExist_** (boolean): whether or not the partition with the specified `number` should exist. If omitted, it defaults to true. If false, Ignition will either delete the specified partition or fail, depending on `wipePartitionEntry`. If false, `number` must be specified and non-zero and `label`, `start`, `size`, `guid`, and `typeGuid` must all be omitted.
+      * **_resize_** (boolean): whether or not the existing partition should be resized. If omitted, it defaults to false. If true, Ignition will resize an existing partition if it matches the config in all respects except the partition size.
+ * **_raid_** (list of objects): the list of RAID arrays to be configured. Every RAID array must have a unique `name`.
+ * **name** (string): the name to use for the resulting md device.
+ * **level** (string): the redundancy level of the array (e.g. linear, raid1, raid5, etc.).
+ * **devices** (list of strings): the list of devices (referenced by their absolute path) in the array.
+ * **_spares_** (integer): the number of spares (if applicable) in the array.
+ * **_options_** (list of strings): any additional options to be passed to mdadm.
+ * **_filesystems_** (list of objects): the list of filesystems to be configured. `device` and `format` need to be specified. Every filesystem must have a unique `device`.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **format** (string): the filesystem format (ext4, btrfs, xfs, vfat, swap, or none).
+    * **_path_** (string): the mount point of the filesystem while Ignition is running, relative to where the root filesystem will be mounted. This is not necessarily the same as where it should be mounted in the real root, but it is encouraged to make it the same.
+    * **_wipeFilesystem_** (boolean): whether or not to wipe the device before filesystem creation, see [the documentation on filesystems][ignition-fs-reuse] for more information. Defaults to false.
+ * **_label_** (string): the label of the filesystem.
+ * **_uuid_** (string): the uuid of the filesystem.
+ * **_options_** (list of strings): any additional options to be passed to the format-specific mkfs utility.
+ * **_mountOptions_** (list of strings): any special options to be passed to the mount command.
+ * **_files_** (list of objects): the list of files to be written. Every file, directory and link must have a unique `path`.
+ * **path** (string): the absolute path to the file.
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path. `contents.source` must be specified if `overwrite` is true. Defaults to false.
+ * **_contents_** (object): options related to the contents of the file.
+ * **_compression_** (string): the type of compression used on the contents (null or gzip). Compression cannot be used with S3.
+ * **_source_** (string): the URL of the file contents. Supported schemes are `http`, `https`, `tftp`, `s3`, `gs`, and [`data`][rfc2397]. When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified. If source is omitted and a regular file already exists at the path, Ignition will do nothing. If source is omitted and no file exists, an empty file will be created.
+ * **_httpHeaders_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the file contents.
+        * **_hash_** (string): the hash of the contents, in the form `<type>-<value>` where type is either `sha512` or `sha256`.
+    * **_append_** (list of objects): list of contents to be appended to the file. Follows the same structure as `contents`.
+ * **_compression_** (string): the type of compression used on the contents (null or gzip). Compression cannot be used with S3.
+ * **_source_** (string): the URL of the contents to append. Supported schemes are `http`, `https`, `tftp`, `s3`, `gs`, and [`data`][rfc2397]. When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_httpHeaders_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the appended contents.
+        * **_hash_** (string): the hash of the contents, in the form `<type>-<value>` where type is either `sha512` or `sha256`.
+ * **_mode_** (integer): the file's permission mode. Note that the mode must be properly specified as a **decimal** value (i.e. 0644 -> 420). If not specified, the permission mode for files defaults to 0644 or the existing file's permissions if `overwrite` is false, `contents.source` is unspecified, and a file already exists at the path.
+ * **_user_** (object): specifies the file's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the group of the owner.
+ * **_id_** (integer): the group ID of the owner.
+ * **_name_** (string): the group name of the owner.
+ * **_directories_** (list of objects): the list of directories to be created. Every file, directory, and link must have a unique `path`.
+ * **path** (string): the absolute path to the directory.
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path. If false and a directory already exists at the path, Ignition will only set its permissions. If false and a non-directory exists at that path, Ignition will fail. Defaults to false.
+ * **_mode_** (integer): the directory's permission mode. Note that the mode must be properly specified as a **decimal** value (i.e. 0755 -> 493). If not specified, the permission mode for directories defaults to 0755 or the mode of an existing directory if `overwrite` is false and a directory already exists at the path.
+ * **_user_** (object): specifies the directory's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the group of the owner.
+ * **_id_** (integer): the group ID of the owner.
+ * **_name_** (string): the group name of the owner.
+ * **_links_** (list of objects): the list of links to be created. Every file, directory, and link must have a unique `path`.
+ * **path** (string): the absolute path to the link
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path. If overwrite is false and a matching link exists at the path, Ignition will only set the owner and group. Defaults to false.
+ * **_user_** (object): specifies the symbolic link's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the group of the owner.
+ * **_id_** (integer): the group ID of the owner.
+ * **_name_** (string): the group name of the owner.
+ * **target** (string): the target path of the link
+    * **_hard_** (boolean): if true, a hard link is created; if false (the default), a symbolic link.
+ * **_luks_** (list of objects): the list of luks devices to be created. Every device must have a unique `name`.
+ * **name** (string): the name of the luks device.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+    * **_keyFile_** (object): options related to the contents of the key file.
+      * **_compression_** (string): the type of compression used on the contents (null or gzip). Compression cannot be used with S3.
+      * **_source_** (string): the URL of the key file contents. Supported schemes are `http`, `https`, `tftp`, `s3`, `gs`, and [`data`][rfc2397]. When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_httpHeaders_** (list of objects): a list of HTTP headers to be added to the request. Available for `http` and `https` source schemes only.
+ * **name** (string): the header name.
+ * **_value_** (string): the header contents.
+ * **_verification_** (object): options related to the verification of the key file.
+        * **_hash_** (string): the hash of the contents, in the form `<type>-<value>` where type is either `sha512` or `sha256`.
+ * **_label_** (string): the label of the luks device.
+ * **_uuid_** (string): the uuid of the luks device.
+ * **_options_** (list of strings): any additional options to be passed to the cryptsetup utility.
+    * **_wipeVolume_** (boolean): whether or not to wipe the device before volume creation, see [the documentation on filesystems][ignition-fs-reuse] for more information.
+ * **_clevis_** (object): describes the clevis configuration for the luks device.
+ * **_tang_** (list of objects): describes a tang server. Every server must have a unique `url`.
+ * **url** (string): url of the tang server.
+ * **thumbprint** (string): thumbprint of a trusted signing key.
+ * **_tpm2_** (bool): whether or not to use a tpm2 device.
+ * **_threshold_** (int): sets the minimum number of pieces required to decrypt the device. Default is 1.
+ * **_custom_** (object): overrides the clevis configuration. The `pin` & `config` will be passed directly to `clevis luks bind`. If specified, all other clevis options must be omitted.
+ * **pin** (string): the clevis pin.
+ * **config** (string): the clevis configuration JSON.
+ * **_needsNetwork_** (bool): whether or not the device requires networking.
+* **_systemd_** (object): describes the desired state of the systemd units.
+ * **_units_** (list of objects): the list of systemd units. Every unit must have a unique `name`.
+ * **name** (string): the name of the unit. This must be suffixed with a valid unit type (e.g. "thing.service").
+ * **_enabled_** (boolean): whether or not the service shall be enabled. When true, the service is enabled. When false, the service is disabled. When omitted, the service is unmodified. In order for this to have any effect, the unit must have an install section.
+ * **_mask_** (boolean): whether or not the service shall be masked. When true, the service is masked by symlinking it to `/dev/null`. When false, the service is unmasked by deleting the symlink to `/dev/null` if it exists.
+ * **_contents_** (string): the contents of the unit.
+ * **_dropins_** (list of objects): the list of drop-ins for the unit. Every drop-in must have a unique `name`.
+ * **name** (string): the name of the drop-in. This must be suffixed with ".conf".
+ * **_contents_** (string): the contents of the drop-in.
+* **_passwd_** (object): describes the desired additions to the passwd database.
+ * **_users_** (list of objects): the list of accounts that shall exist. All users must have a unique `name`.
+ * **name** (string): the username for the account.
+ * **_passwordHash_** (string): the encrypted password for the account.
+ * **_sshAuthorizedKeys_** (list of strings): a list of SSH keys to be added as an SSH key fragment at `.ssh/authorized_keys.d/ignition` in the user's home directory. All SSH keys must be unique.
+ * **_uid_** (integer): the user ID of the account.
+ * **_gecos_** (string): the GECOS field of the account.
+ * **_homeDir_** (string): the home directory of the account.
+ * **_noCreateHome_** (boolean): whether or not to create the user's home directory. This only has an effect if the account doesn't exist yet.
+ * **_primaryGroup_** (string): the name of the primary group of the account.
+ * **_groups_** (list of strings): the list of supplementary groups of the account.
+ * **_noUserGroup_** (boolean): whether or not to create a group with the same name as the user. This only has an effect if the account doesn't exist yet.
+ * **_noLogInit_** (boolean): whether or not to add the user to the lastlog and faillog databases. This only has an effect if the account doesn't exist yet.
+ * **_shell_** (string): the login shell of the new account.
+    * **_shouldExist_** (boolean): whether or not the user with the specified `name` should exist. If omitted, it defaults to true. If false, then Ignition will delete the specified user.
+ * **_system_** (bool): whether or not this account should be a system account. This only has an effect if the account doesn't exist yet.
+ * **_groups_** (list of objects): the list of groups to be added. All groups must have a unique `name`.
+ * **name** (string): the name of the group.
+ * **_gid_** (integer): the group ID of the new group.
+ * **_passwordHash_** (string): the encrypted password of the new group.
+    * **_shouldExist_** (boolean): whether or not the group with the specified `name` should exist. If omitted, it defaults to true. If false, then Ignition will delete the specified group.
+ * **_system_** (bool): whether or not the group should be a system group. This only has an effect if the group doesn't exist yet.
+* **_kernelArguments_** (object): describes the desired kernel arguments.
+ * **_shouldExist_** (list of strings): the list of kernel arguments that should exist.
+ * **_shouldNotExist_** (list of strings): the list of kernel arguments that should not exist.
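+
+As an illustrative sketch of how these sections fit together (the hostname, SSH key, unit contents, and kernel argument below are placeholder values, not defaults), a small Ignition v3 config could look like:
+
+```json
+{
+  "ignition": { "version": "3.3.0" },
+  "storage": {
+    "files": [
+      {
+        "path": "/etc/hostname",
+        "mode": 420,
+        "overwrite": true,
+        "contents": { "source": "data:,node01" }
+      }
+    ]
+  },
+  "systemd": {
+    "units": [
+      {
+        "name": "hello.service",
+        "enabled": true,
+        "contents": "[Unit]\nDescription=Example\n\n[Service]\nExecStart=/usr/bin/echo hello\n\n[Install]\nWantedBy=multi-user.target\n"
+      }
+    ]
+  },
+  "passwd": {
+    "users": [
+      {
+        "name": "core",
+        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"]
+      }
+    ]
+  },
+  "kernelArguments": {
+    "shouldExist": ["console=ttyS0"]
+  }
+}
+```
+
+Note that `"mode": 420` is the decimal form of octal `0644`, and `data:,node01` is an [RFC 2397][rfc2397] data URL carrying the literal string `node01`.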
+
+## Ignition v2
+
+Ignition v2 is no longer under active development but is still supported (specification 2.3.0). The high-level [Container Linux Config YAML format][ct-config] can be used to emit Ignition v2 configs:
+
+* **ignition** (object): metadata about the configuration itself.
+ * **version** (string): the semantic version number of the spec. The spec version must be compatible with the latest version (`2.3.0`). Compatibility requires the major versions to match and the spec version be less than or equal to the latest version. `-experimental` versions compare less than the final version with the same number, and previous experimental versions are not accepted.
+  * **_config_** (object): options related to the configuration.
+ * **_append_** (list of objects): a list of the configs to be appended to the current config.
+ * **source** (string): the URL of the config. Supported schemes are `http`, `https`, `s3`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_verification_** (object): options related to the verification of the config.
+        * **_hash_** (string): the hash of the config, in the form `<type>-<value>` where type is `sha512`.
+ * **_replace_** (object): the config that will replace the current.
+ * **source** (string): the URL of the config. Supported schemes are `http`, `https`, `s3`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_verification_** (object): options related to the verification of the config.
+      * **_hash_** (string): the hash of the config, in the form `<type>-<value>` where type is `sha512`.
+ * **_timeouts_** (object): options relating to `http` timeouts when fetching files over `http` or `https`.
+    * **_httpResponseHeaders_** (integer): the time to wait (in seconds) for the server's response headers (but not the body) after making a request. 0 indicates no timeout. Default is 10 seconds.
+    * **_httpTotal_** (integer): the time limit (in seconds) for the operation (connection, request, and response), including retries. 0 indicates no timeout. Default is 0.
+ * **_security_** (object): options relating to network security.
+ * **_tls_** (object): options relating to TLS when fetching resources over `https`.
+ * **_certificateAuthorities_** (list of objects): the list of additional certificate authorities (in addition to the system authorities) to be used for TLS verification when fetching over `https`.
+ * **source** (string): the URL of the certificate (in PEM format). Supported schemes are `http`, `https`, `s3`, `tftp`, and [`data`][rfc2397]. Note: When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_verification_** (object): options related to the verification of the certificate.
+          * **_hash_** (string): the hash of the certificate, in the form `<type>-<value>` where type is `sha512`.
+* **_storage_** (object): describes the desired state of the system's storage devices.
+ * **_disks_** (list of objects): the list of disks to be configured and their options.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **_wipeTable_** (boolean): whether or not the partition tables shall be wiped. When true, the partition tables are erased before any further manipulation. Otherwise, the existing entries are left intact.
+ * **_partitions_** (list of objects): the list of partitions and their configuration for this particular disk.
+ * **_label_** (string): the PARTLABEL for the partition.
+      * **_number_** (integer): the partition number, which dictates its position in the partition table (one-indexed). If zero, use the next available partition slot.
+ * **_sizeMiB_** (integer): the size of the partition (in mebibytes). If zero, the partition will be made as large as possible.
+ * **_startMiB_** (integer): the start of the partition (in mebibytes). If zero, the partition will be positioned at the start of the largest block available.
+      * **_size_** (integer, DEPRECATED): the size of the partition (in device logical sectors, 512 or 4096 bytes). If zero, the partition will be made as large as possible. This field has been marked for deprecation; please use the **_sizeMiB_** field instead.
+      * **_start_** (integer, DEPRECATED): the start of the partition (in device logical sectors). If zero, the partition will be positioned at the start of the largest block available. This field has been marked for deprecation; please use the **_startMiB_** field instead.
+ * **_typeGuid_** (string): the GPT [partition type GUID][part-types]. If omitted, the default will be 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem data).
+ * **_guid_** (string): the GPT unique partition GUID.
+      * **_wipePartitionEntry_** (boolean): if true, Ignition will clobber an existing partition if it does not match the config. If false (default), Ignition will fail instead.
+      * **_shouldExist_** (boolean): whether or not the partition with the specified `number` should exist. If omitted, it defaults to true. If false, Ignition will either delete the specified partition or fail, depending on `wipePartitionEntry`. If false, `number` must be specified and non-zero and `label`, `start`, `size`, `guid`, and `typeGuid` must all be omitted.
+ * **_raid_** (list of objects): the list of RAID arrays to be configured.
+ * **name** (string): the name to use for the resulting md device.
+ * **level** (string): the redundancy level of the array (e.g. linear, raid1, raid5, etc.).
+ * **devices** (list of strings): the list of devices (referenced by their absolute path) in the array.
+ * **_spares_** (integer): the number of spares (if applicable) in the array.
+ * **_options_** (list of strings): any additional options to be passed to mdadm.
+ * **_filesystems_** (list of objects): the list of filesystems to be configured and/or used in the "files" section. Either "mount" or "path" needs to be specified.
+ * **_name_** (string): the identifier for the filesystem, internal to Ignition. This is only required if the filesystem needs to be referenced in the "files" section.
+ * **_mount_** (object): contains the set of mount and formatting options for the filesystem. A non-null entry indicates that the filesystem should be mounted before it is used by Ignition.
+ * **device** (string): the absolute path to the device. Devices are typically referenced by the `/dev/disk/by-*` symlinks.
+ * **format** (string): the filesystem format (ext4, btrfs, xfs, vfat, or swap).
+      * **_wipeFilesystem_** (boolean): whether or not to wipe the device before filesystem creation, see [the documentation on filesystems][ignition-fs-reuse] for more information.
+ * **_label_** (string): the label of the filesystem.
+ * **_uuid_** (string): the uuid of the filesystem.
+ * **_options_** (list of strings): any additional options to be passed to the format-specific mkfs utility.
+ * **_create_** (object, DEPRECATED): contains the set of options to be used when creating the filesystem.
+ * **_force_** (boolean, DEPRECATED): whether or not the create operation shall overwrite an existing filesystem.
+ * **_options_** (list of strings, DEPRECATED): any additional options to be passed to the format-specific mkfs utility.
+ * **_path_** (string): the mount-point of the filesystem. A non-null entry indicates that the filesystem has already been mounted by the system at the specified path. This is really only useful for "/sysroot".
+ * **_files_** (list of objects): the list of files to be written.
+ * **filesystem** (string): the internal identifier of the filesystem in which to write the file. This matches the last filesystem with the given identifier.
+ * **path** (string): the absolute path to the file.
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path. Defaults to true.
+ * **_append_** (boolean): whether to append to the specified file. Creates a new file if nothing exists at the path. Cannot be set if overwrite is set to true.
+ * **_contents_** (object): options related to the contents of the file.
+ * **_compression_** (string): the type of compression used on the contents (null or gzip). Compression cannot be used with S3.
+ * **_source_** (string): the URL of the file contents. Supported schemes are `http`, `https`, `tftp`, `s3`, and [`data`][rfc2397]. When using `http`, it is advisable to use the verification option to ensure the contents haven't been modified.
+ * **_verification_** (object): options related to the verification of the file contents.
+        * **_hash_** (string): the hash of the contents, in the form `<type>-<value>` where type is `sha512`.
+ * **_mode_** (integer): the file's permission mode. Note that the mode must be properly specified as a **decimal** value (i.e. 0644 -> 420).
+ * **_user_** (object): specifies the file's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the group of the owner.
+ * **_id_** (integer): the group ID of the owner.
+ * **_name_** (string): the group name of the owner.
+ * **_directories_** (list of objects): the list of directories to be created.
+ * **filesystem** (string): the internal identifier of the filesystem in which to create the directory. This matches the last filesystem with the given identifier.
+ * **path** (string): the absolute path to the directory.
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path.
+ * **_mode_** (integer): the directory's permission mode. Note that the mode must be properly specified as a **decimal** value (i.e. 0755 -> 493).
+ * **_user_** (object): specifies the directory's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the group of the owner.
+ * **_id_** (integer): the group ID of the owner.
+ * **_name_** (string): the group name of the owner.
+ * **_links_** (list of objects): the list of links to be created
+ * **filesystem** (string): the internal identifier of the filesystem in which to write the link. This matches the last filesystem with the given identifier.
+ * **path** (string): the absolute path to the link
+ * **_overwrite_** (boolean): whether to delete preexisting nodes at the path.
+ * **_user_** (object): specifies the symbolic link's owner.
+ * **_id_** (integer): the user ID of the owner.
+ * **_name_** (string): the user name of the owner.
+ * **_group_** (object): specifies the group of the owner.
+ * **_id_** (integer): the group ID of the owner.
+ * **_name_** (string): the group name of the owner.
+ * **target** (string): the target path of the link
+      * **_hard_** (boolean): if true, a hard link is created; if false (the default), a symbolic link.
+* **_systemd_** (object): describes the desired state of the systemd units.
+ * **_units_** (list of objects): the list of systemd units.
+ * **name** (string): the name of the unit. This must be suffixed with a valid unit type (e.g. "thing.service").
+ * **_enable_** (boolean, DEPRECATED): whether or not the service shall be enabled. When true, the service is enabled. In order for this to have any effect, the unit must have an install section.
+ * **_enabled_** (boolean): whether or not the service shall be enabled. When true, the service is enabled. When false, the service is disabled. When omitted, the service is unmodified. In order for this to have any effect, the unit must have an install section.
+ * **_mask_** (boolean): whether or not the service shall be masked. When true, the service is masked by symlinking it to `/dev/null`.
+ * **_contents_** (string): the contents of the unit.
+ * **_dropins_** (list of objects): the list of drop-ins for the unit.
+ * **name** (string): the name of the drop-in. This must be suffixed with ".conf".
+ * **_contents_** (string): the contents of the drop-in.
+* **_networkd_** (object): describes the desired state of the networkd files.
+ * **_units_** (list of objects): the list of networkd files.
+ * **name** (string): the name of the file. This must be suffixed with a valid unit type (e.g. "00-eth0.network").
+ * **_contents_** (string): the contents of the networkd file.
+ * **_dropins_** (list of objects): the list of drop-ins for the unit.
+ * **name** (string): the name of the drop-in. This must be suffixed with ".conf".
+ * **_contents_** (string): the contents of the drop-in.
+* **_passwd_** (object): describes the desired additions to the passwd database.
+ * **_users_** (list of objects): the list of accounts that shall exist.
+ * **name** (string): the username for the account.
+ * **_passwordHash_** (string): the encrypted password for the account.
+ * **_sshAuthorizedKeys_** (list of strings): a list of SSH keys to be added to the user's authorized_keys.
+ * **_uid_** (integer): the user ID of the account.
+ * **_gecos_** (string): the GECOS field of the account.
+ * **_homeDir_** (string): the home directory of the account.
+ * **_noCreateHome_** (boolean): whether or not to create the user's home directory. This only has an effect if the account doesn't exist yet.
+ * **_primaryGroup_** (string): the name of the primary group of the account.
+ * **_groups_** (list of strings): the list of supplementary groups of the account.
+ * **_noUserGroup_** (boolean): whether or not to create a group with the same name as the user. This only has an effect if the account doesn't exist yet.
+ * **_noLogInit_** (boolean): whether or not to add the user to the lastlog and faillog databases. This only has an effect if the account doesn't exist yet.
+ * **_shell_** (string): the login shell of the new account.
+ * **_system_** (bool): whether or not this account should be a system account. This only has an effect if the account doesn't exist yet.
+ * **_create_** (object, DEPRECATED): contains the set of options to be used when creating the user. A non-null entry indicates that the user account shall be created. This object has been marked for deprecation, please use the **_users_** level fields instead.
+ * **_uid_** (integer): the user ID of the new account.
+ * **_gecos_** (string): the GECOS field of the new account.
+ * **_homeDir_** (string): the home directory of the new account.
+ * **_noCreateHome_** (boolean): whether or not to create the user's home directory.
+ * **_primaryGroup_** (string): the name or ID of the primary group of the new account.
+ * **_groups_** (list of strings): the list of supplementary groups of the new account.
+ * **_noUserGroup_** (boolean): whether or not to create a group with the same name as the user.
+ * **_noLogInit_** (boolean): whether or not to add the user to the lastlog and faillog databases.
+ * **_shell_** (string): the login shell of the new account.
+ * **_system_** (bool): whether or not to make the user a system user.
+ * **_groups_** (list of objects): the list of groups to be added.
+ * **name** (string): the name of the group.
+ * **_gid_** (integer): the group ID of the new group.
+ * **_passwordHash_** (string): the encrypted password of the new group.
+ * **_system_** (bool): whether or not the group should be a system group. This only has an effect if the group doesn't exist yet.
+
+[part-types]: http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
+[rfc2397]: https://tools.ietf.org/html/rfc2397
+[butane-spec]: https://coreos.github.io/butane/config-flatcar-v1_0/
+[ct-config]: ../config-transpiler/configuration
+[ignition-fs-reuse]: https://github.com/coreos/ignition/blob/main/docs/operator-notes.md#filesystem-reuse-semantics
diff --git a/content/docs/latest/provisioning/sysext/_index.md b/content/docs/latest/provisioning/sysext/_index.md
new file mode 100644
index 00000000..5c5b4f0b
--- /dev/null
+++ b/content/docs/latest/provisioning/sysext/_index.md
@@ -0,0 +1,213 @@
+---
+title: Systemd-sysext
+description: Extending the base OS with systemd-sysext images
+weight: 39
+---
+
+Flatcar Container Linux bundles various software components with fixed versions together into one release.
+For users that require a particular version of a software component, this means that the software needs to be supplied out of band, overriding the built-in copy.
+In the past Torcx was introduced as a way to switch between Docker versions.
+Another approach we recommended was to [store binaries in `/opt/bin`](../container-runtimes/use-a-custom-docker-or-containerd-version/) and prefer them in the `PATH`.
+
+The systemd project announced the portable services feature to address deploying custom services.
+However, since it only covers the service itself without making the client binaries available to the user, it doesn't fully fit the use case.
+The systemd-sysext feature finally provides a way to extend the base OS with a `/usr` overlay, thereby making custom binaries available to the user.
+While systemd-sysext images are not yet well suited for including systemd units, Flatcar ships `ensure-sysext.service` as a workaround to automatically load the image's services.
+Systemd-sysext is supported for user-provided sysext images in Flatcar versions ≥ 3185.0.0.
+
+## Torcx deprecation
+
+Since systemd-sysext is a more generic and maintained solution than Torcx, it will replace Torcx. Flatcar releases after major version 3760 will not ship Torcx at all.
+Starting from Flatcar version 3185.0.0, we encourage you to migrate any Torcx usage and convert your Torcx images with the `convert_torcx_image.sh` helper script from the [`sysext-bakery`][sysext-bakery] repository, mentioned later in this document.
+
+## The sysext format
+
+Sysext images can be disk image files or simple folders (details in [`man systemd-sysext`](https://www.freedesktop.org/software/systemd/man/systemd-sysext.html)).
+They get loaded by `systemd-sysext.service`, which looks for them in `/etc/extensions/` or `/var/lib/extensions`, among other locations.
+An image must be named `NAME.raw`, while a plain folder simply uses `NAME` as its name.
+The image can be a plain ext4 or btrfs filesystem image, but squashfs is a useful format to consider: besides offering compression, the `mksquashfs` tool simply takes a directory as input and requires neither loop devices nor mounting an image file.
+
+Inside the image or folder structure there must be a file `usr/lib/extension-release.d/extension-release.NAME` with metadata used for version matching.
+The basic matching that needs to be there is `ID=flatcar` plus one of `VERSION_ID` or `SYSEXT_LEVEL`.
+If your binaries link against Flatcar's binaries under `/usr`, you must couple your sysext image to the Flatcar version by specifying `VERSION_ID=MAJOR.MINOR.PATCH` in `extension-release.NAME` to match the `VERSION_ID` field from `/etc/os-release`.
+This means that the sysext image won't be loaded anymore after an OS update.
+Therefore, it is recommended to use static binaries, which lifts the requirement of coupling the versions.
+In this case you can specify `SYSEXT_LEVEL=1.0` instead of `VERSION_ID`.
+The matching semantics for `SYSEXT_LEVEL` are limited at the moment, and the use case for bumping the version is not there yet.
+In summary, this is what you will normally write to the metadata file:
+
+```
+ID=flatcar
+SYSEXT_LEVEL=1.0
+```
+
+Then place your binaries under `usr/bin/` and your systemd units under `usr/lib/systemd/system/`.
+While Flatcar currently allows you to enable systemd units by including the symlinks it would generate when enabling the units, e.g., `sockets.target.wants/my.socket` → `../my.socket`, this is not recommended.
+The recommended way is to ship drop-ins for the target units that start your unit, e.g., `usr/lib/systemd/system/sockets.target.d/10-docker-socket.conf` with the following content (similar for `multi-user.target` and a `.service` unit):
+
+```ini
+[Unit]
+Upholds=docker.socket
+```
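+
+Putting the pieces together, here is a sketch of building a minimal squashfs sysext image from a plain directory (the name `mysysext` and the `mytool` binary are hypothetical; `mksquashfs` comes with squashfs-tools):
+
+```shell
+# Assemble the directory tree for a sysext image named "mysysext"
+mkdir -p mysysext/usr/bin mysysext/usr/lib/extension-release.d
+printf '#!/bin/sh\necho hello from mysysext\n' > mysysext/usr/bin/mytool
+chmod +x mysysext/usr/bin/mytool
+
+# Version-matching metadata; the file name must end in the sysext name
+cat > mysysext/usr/lib/extension-release.d/extension-release.mysysext <<'EOF'
+ID=flatcar
+SYSEXT_LEVEL=1.0
+EOF
+
+# Pack it; -all-root makes the files owned by root inside the image
+if command -v mksquashfs >/dev/null; then
+  mksquashfs mysysext mysysext.raw -all-root
+fi
+```
+
+After copying `mysysext.raw` to `/etc/extensions/` and restarting `systemd-sysext`, `mytool` would appear under `/usr/bin/`.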
+
+## Supplying your sysext image from Ignition
+
+The following Butane Config YAML can be transpiled to Ignition JSON and will download a custom Docker+containerd sysext image on first boot.
+It also disables Torcx and the built-in Docker and containerd sysext images we plan to ship in Flatcar in the future (to revert this, you can find the original targets of the symlinks in `/usr/share/flatcar/etc/extensions/`; as said, these are not yet shipped).
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/extensions/mydocker.raw
+ mode: 0644
+ contents:
+ source: https://myserver.net/mydocker.raw
+ - path: /etc/systemd/system-generators/torcx-generator
+ links:
+ - path: /etc/extensions/docker-flatcar.raw
+ target: /dev/null
+ overwrite: true
+ - path: /etc/extensions/containerd-flatcar.raw
+ target: /dev/null
+ overwrite: true
+```
+
+After boot you can see it loaded in the output of the `systemd-sysext` command:
+
+```
+HIERARCHY EXTENSIONS SINCE
+/opt none -
+/usr mydocker Wed 2022-03-23 14:16:37 UTC
+```
+
+You can reload the sysext images at runtime by executing `systemctl restart systemd-sysext`.
+In Flatcar this also triggers `ensure-sysext.service` to reload the unit files from disk (in the future this may be covered by `systemd-sysext` itself).
+As an additional workaround, Flatcar currently also reevaluates `multi-user.target`, `sockets.target`, and `timers.target` to make sure your enabled systemd units run, though this wouldn't be needed for units started via `Upholds=` drop-ins.
+A manual `systemd-sysext refresh` is not recommended.
+
+## Creating custom sysext images
+
+The [`sysext-bakery`][sysext-bakery] repository under the Flatcar GitHub organization serves as a central point for sysext building tools.
+Please reach out if your use case isn't covered and work with us to include it there.
+
+### Upstream Docker sysext images
+
+The Docker releases ship static binaries including containerd; the only missing pieces are the systemd units.
+To ease the process, the [`create_docker_sysext.sh`](https://raw.githubusercontent.com/flatcar/sysext-bakery/main/create_docker_sysext.sh) helper script takes care of downloading the release binaries and adding the systemd unit files, and creates a combined Docker+containerd sysext image:
+
+```
+./create_docker_sysext.sh 20.10.13 mydocker
+[… writes mydocker.raw into current directory …]
+```
+
+## Converting a Torcx image
+
+In case you have an existing Torcx image, you can convert it with the [`convert_torcx_image.sh`](https://raw.githubusercontent.com/flatcar/sysext-bakery/main/convert_torcx_image.sh) helper script (currently only Torcx tarballs are supported, and the conversion is done on a best-effort basis):
+
+```
+./convert_torcx_image.sh TORCXTAR SYSEXTNAME
+[… writes SYSEXTNAME.raw into the current directory …]
+```
+
+Please also make sure that you don't have a `containerd.service` drop-in file under `/etc` that uses Torcx paths.
+
+## Updating custom sysext images
+
+From Flatcar 3510.2.0, you can use the `systemd-sysupdate` tool, which handles downloading newer versions of your sysext image at runtime from a location you specify.
+
+Here is a long example using Butane; the shorter, recommended usage example for consuming `sysext-bakery` images is in the [sysext-bakery README](https://github.com/flatcar/sysext-bakery#consuming-the-published-images):
+```yaml
+# butane < config.yaml > config.json
+# ./flatcar_production_qemu.sh -i ./config.json
+variant: flatcar
+version: 1.0.0
+storage:
+ links:
+ - path: /etc/extensions/docker.raw
+ target: /opt/extensions/docker/docker-24.0.5-x86-64.raw
+ hard: false
+ - path: /etc/extensions/docker-flatcar.raw
+ target: /dev/null
+ overwrite: true
+ - path: /etc/extensions/containerd-flatcar.raw
+ target: /dev/null
+ overwrite: true
+ files:
+ - path: /opt/extensions/docker/docker-24.0.5-x86-64.raw
+ contents:
+ source: https://github.com/flatcar/sysext-bakery/releases/download/20230901/docker-24.0.5-x86-64.raw
+ - path: /etc/systemd/system-generators/torcx-generator
+ - path: /etc/sysupdate.d/noop.conf
+ contents:
+ inline: |
+ [Source]
+ Type=regular-file
+ Path=/
+ MatchPattern=invalid@v.raw
+ [Target]
+ Type=regular-file
+ Path=/
+ - path: /etc/sysupdate.docker.d/docker.conf
+ contents:
+ inline: |
+ [Transfer]
+ Verify=false
+
+ [Source]
+ Type=url-file
+ Path=https://github.com/flatcar/sysext-bakery/releases/latest/download/
+ MatchPattern=docker-@v-%a.raw
+
+ [Target]
+ InstancesMax=3
+ Type=regular-file
+ Path=/opt/extensions/docker
+ CurrentSymlink=/etc/extensions/docker.raw
+systemd:
+ units:
+ - name: systemd-sysupdate.timer
+ enabled: true
+ - name: systemd-sysupdate.service
+ dropins:
+ - name: docker.conf
+ contents: |
+ [Service]
+ ExecStartPre=/usr/lib/systemd/systemd-sysupdate -C docker update
+ - name: sysext.conf
+ contents: |
+ [Service]
+ ExecStartPost=systemctl restart systemd-sysext
+```
+
+This configuration will enable the `systemd-sysupdate.timer` unit that will check every 2-6 hours for a new Docker sysext image available from the latest release of [`sysext-bakery`][sysext-bakery].
+Use `arm64` instead of `x86-64` for arm64 machines.
+
+## Debugging
+
+The `systemd-dissect` tool gives a quick overview for a systemd-sysext image:
+
+```
+sudo systemd-dissect docker-compose.raw
+```
+
+You can list the contents of a systemd-sysext image with the `--list` flag (or `--mtree` for a detailed view):
+
+```
+sudo systemd-dissect --list docker-compose.raw
+```
+
+A single file can be extracted with:
+
+```
+sudo systemd-dissect --with docker-compose.raw cat usr/lib/extension-release.d/extension-release.docker-compose
+```
+
+To get more information about found incompatibilities during merging, enable the debug output:
+
+```
+sudo SYSTEMD_LOG_LEVEL=debug systemd-sysext refresh
+```
+
+[sysext-bakery]: https://github.com/flatcar/sysext-bakery
diff --git a/content/docs/latest/provisioning/terraform/_index.md b/content/docs/latest/provisioning/terraform/_index.md
new file mode 100644
index 00000000..08db6c47
--- /dev/null
+++ b/content/docs/latest/provisioning/terraform/_index.md
@@ -0,0 +1,141 @@
+---
+title: Terraform
+description: Provision Flatcar Container Linux with an Ignition configuration through Terraform
+weight: 40
+---
+
+Flatcar Container Linux fits well with Terraform and the principle of Immutable Infrastructure: instead of changing a deployed node via SSH, you destroy it and deploy a new one.
+The big advantages compared to other OSes are the built-in support for declarative configuration with Ignition on first boot and the automatic OS updates.
+
+Many cloud services let you provide _User Data_ for a node in an extra attribute. Ignition will fetch the configuration from this place and apply it. No Terraform SSH provisioning commands are needed.
+
+## Terraform Providers for the different Cloud Services
+
+How to use the Terraform Providers of each cloud service is explained on the respective documentation page under [Cloud Providers][cloud].
+
+## Changing the Ignition Configuration
+
+Changes to the User Data attribute should normally destroy the node and recreate it so that Ignition runs again on the first boot.
+However, this behavior depends on the Terraform provider for your cloud service.
+Some cloud services allow updating the attribute in-place without destroying the node, but Ignition won't run again by default, and even if you triggered it, you would have to be careful, as this is not recommended (more on this at the end of this document).
+You can also tell Terraform to ignore attribute changes (set `ignore_changes`) and thus delay the change until you manually destroy the node and recreate it with Terraform.
+This is sometimes useful but may be a source of errors when you don't know that a node still runs an old configuration.
+
+To make the recreation of a node less disruptive, you can architect your setup to accept a node to exist twice at the same time and set `create_before_destroy` to let Terraform first create the replacement node and then destroy the old node.
+On AWS, it's also possible to use Auto Scaling Groups instead of directly operating on instances. In this case, to update the User Data you replace the Auto Scaling Group; the new group takes care of creating new nodes while the old group deletes the old ones.
+It is also advisable to separate the persistent data from the disposable nodes to external data volumes or to use a backup mechanism and inject the backup on the first boot.
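+
+As a hypothetical sketch, both behaviors map to Terraform lifecycle settings on the instance resource (attribute and resource names vary by provider):
+
+```
+resource "packet_device" "machine" {
+  operating_system = "flatcar_stable"
+  user_data        = data.ct_config.machine-ignition.rendered
+
+  lifecycle {
+    # Keep the node on its old configuration until it is manually recreated.
+    ignore_changes = [user_data]
+    # Alternatively, create the replacement node before destroying the old one:
+    # create_before_destroy = true
+  }
+}
+```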
+
+## Generating the Ignition Configuration within Terraform
+
+To convert the Butane Config in YAML to the final Ignition Config in JSON, you don't need to run [`butane`][butane-configs] manually. Instead, you can do this directly within Terraform through the [`terraform-ct-provider`][terraform-ct-provider] (starting from v0.12.0).
+Combined with the `template-provider` you can reference Terraform variables in the YAML template.
+
+An alternative is the [`terraform-ignition-provider`][terraform-ignition-provider], which allows assembling the Ignition Config from Terraform declarations.
+
+The following snippet demonstrates the use of the `terraform-ct-provider` and the `template-provider` to specify the User Data attribute on a Packet (now Equinix Metal) instance:
+
+```
+resource "packet_device" "machine" {
+ operating_system = "flatcar_stable"
+ user_data = data.ct_config.machine-ignition.rendered
+ [...]
+}
+
+data "ct_config" "machine-ignition" {
+ content = data.template_file.machine-cl-config.rendered
+}
+
+data "template_file" "machine-cl-config" {
+ template = file("${path.module}/machine.yaml.tmpl")
+ vars = { something = var.something }
+}
+```
+
+When using a template, be careful to refer to Terraform variables via `${variable}`, while shell variables in scripts used at OS runtime must be escaped as `$${variable}` or written with the `$variable` syntax.
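+
+A minimal sketch of this escaping (the template file name and variables are hypothetical):
+
+```shell
+# Write a template where Terraform substitutes ${something} while the OS-runtime
+# script keeps its shell variable; the quoted heredoc leaves everything literal.
+cat > machine.yaml.tmpl <<'EOF'
+storage:
+  files:
+    - path: /opt/bin/greet
+      mode: 0755
+      contents:
+        inline: |
+          #!/bin/bash
+          echo "Deployed for: ${something}"   # replaced by Terraform at plan time
+          echo "Running as: $${USER}"         # becomes ${USER} for the shell at runtime
+EOF
+```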
+
+## Updating the User Data in-place and rerunning Ignition instead of destroying nodes
+
+Sometimes you want to take the declarative approach of Terraform but can't accept that nodes are destroyed and recreated for configuration changes.
+This is the case for nodes that have a manual or slow bring-up process, much data that can't be moved easily, or where the IP address should not change.
+
+Ignition can be told to run again through `flatcar-reset` (available since Alpha 3535.0.0), which also takes care of cleaning up old rootfs state and keeping only the rootfs data you want to keep.
+
+A Terraform setup that runs `flatcar-reset` when the cloud instance userdata changes can be found in the [flatcar-terraform repository][example-repo].
+The list of paths in the rootfs to keep can be configured.
+
+### Reformatting
+
+Another alternative to `flatcar-reset` is to reformat the root filesystem with Ignition to ensure that no old state is present, or use cloud-provider [reinstall options like on Equinix Metal](https://registry.terraform.io/providers/equinix/equinix/latest/docs/resources/equinix_metal_device#reinstall).
+Persistent data should be stored on another partition which should be set to be kept.
+
+We can also preserve the machine ID by setting it as a kernel cmdline parameter (it must not be kept as a file on the root filesystem because that prevents the systemd first-boot semantics from enabling units through the preset that Ignition creates).
+
+This Container Linux Config snippet takes care of reformatting the root filesystem and places a reprovisioning helper script on the OEM partition:
+
+```yaml
+storage:
+ files:
+ - path: /reprovision
+ filesystem: oem
+ mode: 0755
+ contents:
+ inline: |
+ #!/bin/bash
+ set -euo pipefail
+ touch /usr/share/oem/grub.cfg
+ sed -i "/linux_append systemd.machine_id=.*/d" /usr/share/oem/grub.cfg
+ echo "set linux_append=\"\$linux_append systemd.machine_id=$(cat /etc/machine-id)\"" >> /usr/share/oem/grub.cfg
+ touch /boot/flatcar/first_boot
+ filesystems:
+ - name: root
+ mount:
+ device: /dev/disk/by-label/ROOT
+ format: ext4
+ wipe_filesystem: true
+ label: ROOT
+ - name: oem
+ mount:
+ device: /dev/disk/by-label/OEM
+ format: btrfs
+ label: OEM
+```
+
+The final User Data needs to be stored on a place where modifications are allowed without destroying the node.
+For some Terraform providers it is directly possible to allow changes but that is not always the case.
+A good option then could be AWS S3 or other similar cloud storage solutions.
+The real User Data of the node is just an Ignition Config that references the external User Data:
+
+```
+{ "ignition": { "version": "2.1.0", "config": { "replace": { "source": "s3://..." } } } }
+```
+
+Under these conditions it is possible to run `sudo /usr/share/oem/reprovision` on the node and trigger a reboot for the new Ignition Config to take effect (assuming data in S3):
+
+```
+resource "null_resource" "reboot-when-ignition-changes" {
+ for_each = toset(var.machines)
+ # Triggered when the Ignition Config changes
+ triggers = {
+ ignition_config = data.ct_config.machine-ignitions[each.key].rendered
+ }
+ # Wait for the new Ignition config object to be ready before rebooting
+ depends_on = [aws_s3_bucket_object.object]
+ # Trigger running Ignition on the next reboot and reboot the instance (current limitation: also runs on the first provisioning)
+ provisioner "local-exec" {
+ command = "while ! ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@${packet_device.machine[each.key].access_public_ipv4} sudo /usr/share/oem/reprovision ; do sleep 1; done; while ! ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@${packet_device.machine[each.key].access_public_ipv4} sudo systemctl reboot; do sleep 1; done"
+ }
+}
+```
+
+## Examples
+
+You can find the full code for working examples in this [git repository][example-repo].
+
+
+[cloud]: ../../installing/cloud/
+[butane-configs]: ../config-transpiler
+[terraform-ct-provider]: https://registry.terraform.io/providers/poseidon/ct/latest
+[terraform-ignition-provider]: https://registry.terraform.io/providers/community-terraform-providers/ignition/latest
+[boot-process]: ../ignition/boot-process/#reprovisioning
+[example-repo]: https://github.com/flatcar/flatcar-terraform
diff --git a/content/docs/latest/provisioning/torcx/_index.md b/content/docs/latest/provisioning/torcx/_index.md
new file mode 100644
index 00000000..919b6f94
--- /dev/null
+++ b/content/docs/latest/provisioning/torcx/_index.md
@@ -0,0 +1,46 @@
+---
+title: "[DEPRECATED / EOL] Torcx"
+description: Addon manager for applying ephemeral changes
+weight: 100
+aliases:
+ - ../os/torcx-overview
+ - ../torcx
+---
+
+## Deprecation Notice
+
+As of 2023, torcx on Flatcar is in deprecation and is in the process of being replaced by [systemd-sysext][sysext].
+
+**Releases after major version 3760 do not ship torcx. If you are using torcx for managing add-ons please migrate to sysext before upgrading to a major release higher than 3760.**
+
+## Torcx overview
+
+[Torcx][gh-torcx] is a boot-time addon manager designed specifically for container OSs like Flatcar Container Linux. At the most basic level, it is a tool for applying ephemeral changes to an immutable system during early boot. This includes providing third-party binary addons and installing systemd units, which can vary across environments and boots. On every boot, Torcx reads its configuration from local disk and propagates specific assets provided by addon packages (which must be available in local stores).
+
+Torcx complements both the [Ignition][ignition] provisioning utility and [systemd][systemd]. Torcx allows customization of Flatcar Container Linux systems without requiring the compilation of custom system images. This goal is achieved by following two main principles: customizations are ephemeral, and they are applied exactly once per boot. Torcx also has a very simple design, with the aim of providing a small low-level system utility which can be driven by more advanced and higher-level tools.
+
+### Torcx execution model and systemd generators
+
+Early in the boot process, execution starts in a minimal initramfs environment where systemd, Ignition, and other boot utilities run. Once up, execution continues by pivoting into the real root file system and by running all [systemd generators][systemd-generator], including the main torcx component, `torcx-generator`.
+`torcx-generator` runs serially before any other service starts to guarantee it does not race with other startup processes. However, this restricts Torcx to using only local resources. Torcx cannot access configuration or addons from remote file systems or network locations.
+
+### Profiles and addons
+
+Torcx customizations are applied via local addon packages, which are referenced by profiles. Addons are simple tar-gzipped archives containing binary assets and a manifest. A user profile (upper profile) can be supplied by the administrator to be merged on top of hard-coded vendor and OEM profiles (lower profiles). Torcx will take care of computing and applying the resulting list of addons on the system.
+
+### Boot-time customizations
+
+Torcx guarantees that customizations are applied at most once per boot, before any other service has been considered for startup. This provides a mechanism to customize most aspects of a Flatcar Container Linux system in a reliable way, and avoids runtime upgrading/downgrading issues. Changes applied by Torcx are not persisted to disk, and therefore last exactly for the lifetime of a single boot of an instance.
+
+By the same token, this should be read as a warning against abusing Torcx in the role of a general purpose container, service, or package manager. Torcx's boot-transient model consumes memory with each addon, and, worse, would require system reboots for even simple upgrades.
+
+## Further design details
+
+For further details on design and goals, the Torcx repository contains extensive [developer documentation][devdocs].
+
+[gh-torcx]: https://github.com/flatcar/torcx
+[ignition]: ../ignition
+[sysext]: ../sysext
+[systemd]: https://www.freedesktop.org/wiki/Software/systemd/
+[systemd-generator]: http://www.freedesktop.org/software/systemd/man/systemd.generator.html
+[devdocs]: https://github.com/flatcar/torcx/blob/master/Documentation
diff --git a/content/docs/latest/provisioning/torcx/metadata-and-systemd-target.md b/content/docs/latest/provisioning/torcx/metadata-and-systemd-target.md
new file mode 100644
index 00000000..19b5a4ce
--- /dev/null
+++ b/content/docs/latest/provisioning/torcx/metadata-and-systemd-target.md
@@ -0,0 +1,83 @@
+---
+title: Torcx metadata and systemd target
+linktitle: Metadata and systemd target
+weight: 10
+aliases:
+ - ../../os/torcx-metadata-and-systemd-target
+ - ../../torcx/torcx-metadata-and-systemd-target
+---
+
+In many cases, it is desirable to inspect the state of a system booted with Torcx and to verify the details of the configuration that has been applied.
+For this purpose, Torcx comes with additional facilities to integrate with systemd-based workflows: a custom target and a metadata file containing environment flags.
+
+## Metadata entries and environment flags
+
+In order to signal a successful run, Torcx writes a metadata file at most once per boot. The format of this file is suitable for consumption by the systemd `EnvironmentFile=` [directive][systemd-exec] and can be used to introspect the booted configuration at runtime.
+
+The metadata file is written to `/run/metadata/torcx` and contains a list of key-value pairs:
+
+```shell
+$ cat /run/metadata/torcx
+
+TORCX_LOWER_PROFILES="vendor"
+TORCX_UPPER_PROFILE="custom-demo"
+TORCX_PROFILE_PATH="/run/torcx/profile.json"
+TORCX_BINDIR="/run/torcx/bin"
+TORCX_UNPACKDIR="/run/torcx/unpack"
+```
+
+These values can be used to detect where assets have been unpacked and propagated (shown above as "unpack" and "bin" entries), which profiles have been sourced (both vendor- and user-provided), and which resulting profile has been applied.
+
+Finally, the runtime profile can be inspected to detect which addons (and versions) are currently applied:
+
+```shell
+$ cat /run/torcx/profile.json
+
+{
+ "kind": "profile-manifest-v0",
+ "value": {
+ "images": []
+ }
+}
+```
+
+## Torcx target unit
+
+System services may depend on successful execution of the Torcx generator. As such, `torcx.target` is provided as a target unit which is only reachable if the generator successfully ran and sealed the system.
+
+This target is not enabled by default, but it can be referenced as a dependency by other units that want to introspect system status:
+
+```shell
+$ sudo systemctl cat torcx-echo.service
+
+[Unit]
+Description=Sample unit relying on torcx run
+After=torcx.target
+Requires=torcx.target
+
+[Service]
+EnvironmentFile=/run/metadata/torcx
+Type=oneshot
+ExecStart=/usr/bin/echo "torcx: applied ${TORCX_UPPER_PROFILE}"
+
+[Install]
+WantedBy=multi-user.target
+```
+
+```shell
+$ sudo systemctl status torcx.target
+
+● torcx.target - Verify torcx succeeded
+ Loaded: loaded (/usr/lib/systemd/system/torcx.target; disabled; vendor preset: disabled)
+ Active: active since [...]
+```
+
+```shell
+$ sudo journalctl -u torcx-echo.service
+
+localhost systemd[1]: Starting Sample unit relying on torcx run...
+localhost echo[756]: torcx: applied custom-demo
+localhost systemd[1]: Started Sample unit relying on torcx run.
+```
+
+[systemd-exec]: https://www.freedesktop.org/software/systemd/man/systemd.exec.html#EnvironmentFile=
diff --git a/content/docs/latest/provisioning/torcx/troubleshooting.md b/content/docs/latest/provisioning/torcx/troubleshooting.md
new file mode 100644
index 00000000..681c99fe
--- /dev/null
+++ b/content/docs/latest/provisioning/torcx/troubleshooting.md
@@ -0,0 +1,47 @@
+---
+title: Troubleshooting Torcx
+linktitle: Troubleshooting
+weight: 20
+aliases:
+ - ../../os/torcx-troubleshooting
+ - ../../torcx/torcx-troubleshooting
+---
+
+The Torcx generator runs early in the boot process, when other system facilities are not yet set up and available for use. In case of errors, troubleshooting and debugging can be performed following the suggestions described here.
+
+## Checking for failures
+
+In case of errors, Torcx stops before sealing the new system state. This means that in order to check for correct execution, it is sufficient to verify that the metadata file exists:
+
+```shell
+test -f /run/metadata/torcx || echo 'torcx failed'
+```
+
+On failures, the metadata seal file will not exist, and `torcx failed` will be printed. Verify failure at boot time using the `torcx.target` unit:
+
+```shell
+$ sudo systemctl start torcx.target ; sudo systemctl status torcx.target
+
+Assertion failed on job for torcx.target.
+
+* torcx.target - Verify torcx succeeded
+ Loaded: loaded (/usr/lib/systemd/system/torcx.target; disabled; vendor preset: disabled)
+ Active: inactive (dead) since [...]
+ Assert: start assertion failed at [...]
+ AssertPathExists=/run/metadata/torcx was not met
+```
+
+## Gathering logs
+
+The single most useful piece of information when troubleshooting failures is the log from `torcx-generator`. This binary does not run as a typical systemd service, so log filtering must be done via its syslog identifier.
+With systemd-journald, this can be accomplished with the following command:
+
+```shell
+journalctl --boot 0 --identifier /usr/lib64/systemd/system-generators/torcx-generator
+```
+
+If this doesn't yield results, run as root. There may be instances in which the journal isn't owned by the systemd-journal group, or the current user is not part of that group.
+
+## Validating the configuration
+
+One common cause for Torcx failure is a malformed configuration (such as a mis-assembled profile, or a syntax error). In other cases, the active profile might reference addon images which are no longer available on the system.
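+
+As a sketch, a profile manifest can be sanity-checked before boot (the profile file name is hypothetical; `jq` is optional and used only if installed):
+
+```shell
+# A well-formed (empty) user profile for reference
+cat > custom-demo.json <<'EOF'
+{
+  "kind": "profile-manifest-v0",
+  "value": {
+    "images": []
+  }
+}
+EOF
+
+# Validate JSON syntax and the expected manifest kind
+if command -v jq >/dev/null; then
+  jq -e '.kind == "profile-manifest-v0"' custom-demo.json
+else
+  grep -q '"kind": "profile-manifest-v0"' custom-demo.json
+fi
+```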
diff --git a/content/docs/latest/reference/_index.md b/content/docs/latest/reference/_index.md
new file mode 100644
index 00000000..31fd786a
--- /dev/null
+++ b/content/docs/latest/reference/_index.md
@@ -0,0 +1,7 @@
+---
+content_type: reference
+title: Reference
+description: >
+ Processes, concepts, APIs and troubleshooting guides for working with Flatcar Container Linux.
+weight: 120
+---
diff --git a/content/docs/latest/reference/constants-and-ids.md b/content/docs/latest/reference/constants-and-ids.md
new file mode 100644
index 00000000..15addae6
--- /dev/null
+++ b/content/docs/latest/reference/constants-and-ids.md
@@ -0,0 +1,34 @@
+---
+title: Constants and IDs
+weight: 10
+aliases:
+ - ../os/constants-and-ids
+---
+
+This document contains well-known constants and IDs used by Flatcar Container Linux.
+
+## Omaha application ID
+
+This UUID is used to identify Container Linux to the update service, i.e. as an `appid` over the [Omaha protocol][omaha].
+
+| Label | Value | Notes |
+|------------------|----------------------------------------|-------|
+| Container Linux | `e96281a6-d1af-4bde-9a0a-97b76e56dc57` | - |
+
+## GPT partition types
+
+These GUIDs are dedicated [GPT partition types][GPT-types] for specific Container Linux usages.
+
+| Label | Value | Notes |
+|--------------------|----------------------------------------|-------|
+| `coreos-usr` | `5dfbf5f4-2848-4bac-aa5e-0d9a20b745a6` | Alias for historical `coreos-rootfs`, currently used for `/usr` only |
+| `coreos-resize` | `3884dd41-8582-4404-b9a8-e9b84f2df50e` | Support for auto-resizing via `extend-filesystems`, current default type for `/` |
+| `coreos-reserved` | `c95dc21a-df0e-4340-8d7b-26cbfa9a03e0` | Reserved for OEM usage, support for customizations via `OEM-CONFIG` partition |
+| `coreos-root-raid` | `be9067b9-ea49-4f15-b4f6-f36f8c9e1818` | RAID partition containing a rootfs, see [notes][raid-storage] for details and limitations |
+
+For more information on the partitioning scheme used by Flatcar Container Linux, read the [disk layout][disk-layout] documentation.
+
+[omaha]: https://github.com/google/omaha/
+[GPT-types]: https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs
+[raid-storage]: ../setup/storage/raid
+[disk-layout]: ../developer-guides/sdk-disk-partitions
diff --git a/content/docs/latest/reference/developer-guides/_index.md b/content/docs/latest/reference/developer-guides/_index.md
new file mode 100644
index 00000000..5706cd2c
--- /dev/null
+++ b/content/docs/latest/reference/developer-guides/_index.md
@@ -0,0 +1,26 @@
+---
+title: Developer Guides
+weight: 10
+aliases:
+ - ../os/developer-guides
+---
+
+This section is aimed at curious developers interested in building Flatcar Container Linux from source and/or in modifying the OS.
+We provide a containerised SDK that allows you to extend Flatcar and to build your own OS images.
+We also provide OEM functionality for cloud providers and similar use cases to customize Flatcar Container Linux to run within their environment.
+
+* [Guide to building custom Flatcar images from source][mod-cl]
+* [Vending production images / CI integration][production-images]
+* [Building custom kernel modules][kernel-modules]
+* [SDK tips and tricks][sdk-tips]
+* [SDK build process][sdk-bootstrapping]
+* [Disk layout][disk-layout]
+* [Kola integration testing framework][mantle-utils]
+
+[sdk-tips]: sdk-tips-and-tricks
+[disk-layout]: sdk-disk-partitions
+[production-images]: sdk-building-production-images
+[mod-cl]: sdk-modifying-flatcar
+[kernel-modules]: kernel-modules
+[sdk-bootstrapping]: sdk-bootstrapping
+[mantle-utils]: https://github.com/flatcar/mantle/blob/flatcar-master/README.md#kola
diff --git a/content/docs/latest/reference/developer-guides/kernel-modules.md b/content/docs/latest/reference/developer-guides/kernel-modules.md
new file mode 100644
index 00000000..375be973
--- /dev/null
+++ b/content/docs/latest/reference/developer-guides/kernel-modules.md
@@ -0,0 +1,95 @@
+---
+title: Building custom kernel modules
+weight: 10
+aliases:
+ - ../../os/kernel-modules
+---
+
+## Create a writable overlay
+
+The kernel modules directory `/usr/lib64/modules` is read-only on Flatcar Container Linux. A writable overlay can be mounted over it to allow installing new modules.
+
+```shell
+modules=/opt/modules # Adjust this writable storage location as needed.
+sudo mkdir -p "${modules}" "${modules}.wd"
+sudo mount \
+ -o "lowerdir=/usr/lib64/modules,upperdir=${modules},workdir=${modules}.wd" \
+ -t overlay overlay /usr/lib64/modules
+```
+
+The following systemd unit can be written to `/etc/systemd/system/usr-lib64-modules.mount`.
+
+```ini
+[Unit]
+Description=Custom Kernel Modules
+Before=local-fs.target
+ConditionPathExists=/opt/modules
+
+[Mount]
+Type=overlay
+What=overlay
+Where=/usr/lib64/modules
+Options=lowerdir=/usr/lib64/modules,upperdir=/opt/modules,workdir=/opt/modules.wd
+
+[Install]
+WantedBy=local-fs.target
+```
+
+Enable the unit so this overlay is mounted automatically on boot.
+
+```shell
+sudo systemctl enable usr-lib64-modules.mount
+```
+
+An alternative is to mount the overlay automatically when the system boots by adding the following line to `/etc/fstab` (creating it if necessary).
+
+```fstab
+overlay /lib/modules overlay lowerdir=/lib/modules,upperdir=/opt/modules,workdir=/opt/modules.wd,nofail 0 0
+```
+
+## Prepare a Flatcar Container Linux development container
+
+Read system configuration files to determine the URL of the development container that corresponds to the current Flatcar Container Linux version.
+
+```shell
+. /usr/share/flatcar/release
+. /usr/share/flatcar/update.conf
+url="https://${GROUP:-stable}.release.flatcar-linux.net/${FLATCAR_RELEASE_BOARD}/${FLATCAR_RELEASE_VERSION}/flatcar_developer_container.bin.bz2"
+```
+
+Download, decompress, and verify the development container image.
+
+```shell
+curl -f -L -O https://www.flatcar.org/security/image-signing-key/Flatcar_Image_Signing_Key.asc
+gpg2 --import Flatcar_Image_Signing_Key.asc
+curl -L "${url}" |
+ tee >(bzip2 -d > flatcar_developer_container.bin) |
+ gpg2 --verify <(curl -Ls "${url}.sig") -
+```
+
+Start the development container with the host's writable modules directory mounted into place.
+
+```shell
+sudo systemd-nspawn \
+ --bind=/usr/lib64/modules \
+ --image=flatcar_developer_container.bin
+```
+
+Now, inside the container, fetch the Flatcar Container Linux package definitions, then download and prepare the Linux kernel source for building external modules.
+
+```shell
+emerge-gitclone
+emerge -gKv coreos-sources
+gzip -cd /proc/config.gz > /usr/src/linux/.config
+make -C /usr/src/linux modules_prepare
+```
+
+## Build and install kernel modules
+
+At this point, upstream projects' instructions for building their out-of-tree modules should work in the Flatcar Container Linux development container. New kernel modules should be installed into `/usr/lib64/modules`, which is bind-mounted from the host, so they will be available on future boots without using the container again.
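+
+As a minimal illustration of the usual out-of-tree workflow, the following builds a trivial "hello" module from inside the development container. The module name and sources are purely hypothetical - substitute your project's own files:
+
+```shell
+# Hypothetical example module; real projects ship their own sources and Makefile.
+mkdir -p hello-module && cd hello-module
+cat > hello.c <<'EOF'
+#include <linux/module.h>
+#include <linux/init.h>
+
+static int __init hello_init(void) { pr_info("hello: loaded\n"); return 0; }
+static void __exit hello_exit(void) { pr_info("hello: unloaded\n"); }
+
+module_init(hello_init);
+module_exit(hello_exit);
+MODULE_LICENSE("GPL");
+EOF
+cat > Makefile <<'EOF'
+obj-m := hello.o
+EOF
+# Build against the prepared kernel source and install into the bind-mounted
+# modules directory (run these inside the development container):
+#   make -C /usr/src/linux M="$PWD" modules
+#   make -C /usr/src/linux M="$PWD" modules_install
+```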
+
+In case the installation step didn't update the module dependency files automatically, running the following command will ensure commands like `modprobe` function correctly with the new modules.
+
+```shell
+sudo depmod
+```
diff --git a/content/docs/latest/reference/developer-guides/sdk-bootstrapping.md b/content/docs/latest/reference/developer-guides/sdk-bootstrapping.md
new file mode 100644
index 00000000..1932a68b
--- /dev/null
+++ b/content/docs/latest/reference/developer-guides/sdk-bootstrapping.md
@@ -0,0 +1,62 @@
+---
+title: SDK build process
+weight: 10
+---
+
+## SDK bootstrap process
+
+This document aims to provide a high-level overview of the SDK build ("bootstrap") process.
+
+SDK bootstrapping is implemented in `bootstrap_sdk` and happens in 4 stages. Gentoo's catalyst is used to run each of the stages in an isolated chroot. Each stage requires a "seed" tarball which contains the root filesystem used by that stage. The first stage uses an existing SDK as its seed (i.e. root FS); stages 2 to 4 use the output of the previous stage as their seed (root FS).
+
+### SDK bootstrap phases
+
+The SDK bootstrap uses a previous SDK release as its starting point. By default, this is the SDK version in which the `bootstrap_sdk` script is run.
+
+Each stage will
+1. unpack the seed
+2. bind-mount package repositories to the seed, mount proc
+3. copy the stage's script into the unpacked seed
+4. chroot into the seed
+5. run the stage's script, creating the root FS for the next stage in `/tmp/stageXroot` (with X being 1 to 4, depending on the stage)
+6. clean up and archive the stage's `/tmp/stageXroot` to be used as the succeeding stage's seed
+
+The output of the 4th stage, i.e. the archived contents of `/tmp/stage4root`, constitutes the fully built SDK.
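+
+Conceptually, the staging loop can be sketched as follows (a heavily simplified sketch; this is not the actual `bootstrap_sdk` code):
+
+```shell
+# Each stage consumes the previous stage's root FS as its seed and produces
+# /tmp/stageXroot, which is archived and becomes the next stage's seed.
+run_stages() {
+  local seed="$1" stage
+  for stage in 1 2 3 4; do
+    # unpack "${seed}", bind-mount the ebuild repos, chroot, run the stage script ...
+    seed="stage${stage}root.tar.bz2"
+  done
+  echo "${seed}"   # the archived stage 4 root FS is the new SDK
+}
+run_stages flatcar-sdk-seed.tar.bz2   # → stage4root.tar.bz2
+```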
+
+#### Stage 1 - Create minimal toolchain to bootstrap the SDK toolchain
+
+Stage 1 is somewhat of a preparation phase and does not actually involve any components to be found in the final SDK. This stage takes the seed tarball - which must be a previously released Flatcar SDK - and builds a minimal toolchain from the seed (with `USE=-*`).
+
+**NOTE**
+* this toolchain, i.e. the output of Stage 1, will be built from the "old" package versions from the seed SDK. Contents of `../third_party/coreos-overlay` and `../third_party/portage-stable` are ignored in this step. Instead, the ebuild repos included in the seed SDK are used (**FIXME: Not entirely true yet**)
+* Stage 1 does _not_ feature strong library link isolation. All packages installed to `/tmp/stage1root` will be linked against libraries in `/` instead of libraries in `/tmp/stage1root`. Therefore, Stage 1 only uses the "old" seed SDK's package versions when building the seed for Stage 2.
+
+#### Stage 2 - Build the toolchain that builds the SDK
+
+Stage 2 uses the minimal (but potentially outdated) toolchain from Stage 1 to build a full-featured (and potentially updated) toolchain used in Stage 3 for building the actual SDK. Stage 2, contrary to Stage 1, offers strong library link isolation - everything installed to `/tmp/stage2root` is linked against libraries in `/tmp/stage2root`.
+
+Stage 2 utilises a (slightly modified) [bootstrap.sh](https://github.com/flatcar/scripts/blob/main/sdk_container/src/third_party/portage-stable/scripts/bootstrap.sh) - the script upstream Gentoo uses to bootstrap a Gentoo distribution.
+
+
+#### Stage 3 - Build the base OS
+
+Stage 3 runs `emerge @world` to build the base OS into `/tmp/stage3root`.
+
+#### Stage 4 - Build additional SDK dependencies and cross-compiler toolchains
+
+Stage 4 builds all additional dependencies of the SDK (from [coreos-devel/sdk-depends](https://github.com/flatcar/scripts/tree/main/sdk_container/src/third_party/coreos-overlay/coreos-devel/sdk-depends)) that were not included in the base OS packages built in Stage 3. Stage 4 also builds the ARM and x86 cross-compiler toolchains included with the SDK. Finally, Stage 4 archives the portage-stable and coreos-overlay repos used to build this stage, for use in future Stage 1s (see above).
+
+The output of Stage 4 is a full-featured SDK tarball.
+
+
+## Tips and tricks
+
+Some helpful notes when working with `bootstrap_sdk` in development.
+
+### Continue an aborted SDK build
+
+Using the `--version` command line flag you can continue an SDK build which was
+previously aborted, e.g. after fixing an issue that caused the abort:
+
+```shell
+~/trunk/src/scripts $ sudo ./bootstrap_sdk --version <release-ID>+<timestamp>
+```
+e.g.
+```shell
+~/trunk/src/scripts $ sudo ./bootstrap_sdk --version 2783.0.0+2021-02-26-1321
+```
diff --git a/content/docs/latest/reference/developer-guides/sdk-building-production-images.md b/content/docs/latest/reference/developer-guides/sdk-building-production-images.md
new file mode 100644
index 00000000..83f2c9bf
--- /dev/null
+++ b/content/docs/latest/reference/developer-guides/sdk-building-production-images.md
@@ -0,0 +1,143 @@
+---
+title: Building production images
+weight: 10
+aliases:
+ - ../../os/sdk-building-production-images
+---
+
+
+## Introduction
+
+This guide discusses automating the OS build process and is aimed at audiences comfortable with producing, testing, and distributing their very own Flatcar releases. For this purpose we'll have a closer look at the CI automation stubs provided in the [scripts repository][scripts-repo-ci].
+
+It is assumed that readers are familiar with the [SDK][mod-cl] and the general build process outlined in the [CI automation][scripts-repo-ci].
+
+
+## Stabilisation process and versioning
+
+The Flatcar OS version number follows the pattern MMMM.m.p[ppp] - "M" being the major number, "m" the minor, and "p" the patch level.
+Specifically:
+- A new major version number is introduced with every new Alpha release.
+- The minor version denotes the stabilisation level; 0 means Alpha, 1 is Beta, and 2 is Stable.
+- The patch level denotes incremental releases within the same channel and is incremented e.g. to address issues before promoting a major version to the next stabilisation phase.
+
+Roughly, every second Alpha major release goes Beta, and every second Beta major release goes Stable.
+This allows for swift iterations in the Alpha channel while keeping Stable major releases … well … stable, and ensures that new major Stable releases introduce meaningful sets of updates and new features.
+A notable exception is the "3033" major release used in the example below; this release shipped ARM64 support and was moved to Stable faster than usual.
+
+A good way to look at releases and stabilisation through channels is to consider major releases as branches from "main", while Alpha, Beta and Stable releases are distinct points in the lifecycle of a release branch:
+```
+ main
+ ...
+ +-- Alpha-2983.0.0
+ | +-- Beta-2983.1.0
+ | +--- Stable-2983.2.0
+ | +--- Stable-2983.2.1
+ |
+ +-- Alpha-3005.0.0
+ +-- Alpha-3005.0.1
+ |
+ +-- Alpha-3033.0.0
+ | +-- Beta-3033.1.0
+ | +-- Beta-3033.1.1
+ | +--- Stable-3033.2.0
+ |
+ +-- Alpha-3046.0.0
+ |
+ +-- Alpha-3066.0.0
+ | +-- Beta-3066.1.0
+ ...
+```
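+
+To illustrate the `MMMM.m.p` scheme, the channel a given version belongs to can be derived from its minor number. The helper below is a throwaway sketch for illustration, not part of the build tooling:
+
+```shell
+# Map the minor version number to its stabilisation channel.
+channel_for_version() {
+  local minor="${1#*.}"   # strip the major number
+  minor="${minor%%.*}"    # strip the patch level
+  case "${minor}" in
+    0) echo alpha ;;
+    1) echo beta ;;
+    2) echo stable ;;
+    *) echo unknown ;;
+  esac
+}
+channel_for_version 2983.0.0   # → alpha
+channel_for_version 3033.2.1   # → stable
+```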
+
+
+## On versioning
+
+For Flatcar versioning, the scripts repo is authoritative:
+Versioning is controlled by the [`version.txt` file in the scripts repo](https://github.com/flatcar/scripts/blob/main/sdk_container/.repo/manifests/version.txt).
+`version.txt` contains version strings for both the SDK version as well as the OS image version.
+
+The core idea is that a simple
+```shell
+git checkout 3033.2.0
+```
+will set up the scripts repo for development on top of Flatcar release `3033.2.0`.
+
+Keeping `version.txt` in sync, updating version strings, and generating version tags is one of the main concerns of the build automation scripts.
+Running a new build via the CI automation will *always* generate a new version.
+This can be a non-production version, e.g. a nightly, PR, or branch build - in which case it should be given a suffix following the `MMMM.m.p` version number.
+The official Flatcar CI uses `-nightly-YYYYMMDD-hhmm` as the suffix for nightly builds, e.g. the tag `alpha-3066.0.0-nightly-20221231-0139` would refer to the nightly of the 31st of December, 2022.
+However, custom CI implementations may freely choose to use different suffixes.
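+
+As an illustration, a nightly tag in the official CI's format could be composed like this (custom CIs may use any suffix scheme):
+
+```shell
+# Compose "<channel>-<version>-nightly-YYYYMMDD-hhmm" from channel and version.
+nightly_tag() {
+  echo "${1}-${2}-nightly-$(date -u +%Y%m%d-%H%M)"
+}
+nightly_tag alpha 3066.0.0   # e.g. alpha-3066.0.0-nightly-20221231-0139
+```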
+
+
+Version information is a mandatory parameter which the CI implementation must feed into the CI automation scripts. Two of the build steps (detailed below) take version parameters:
+- The SDK bootstrap takes a version string that is used for both the new SDK as well as the downstream OS image version.
+- The OS packages build step takes a version string that is used for the new OS image version.
+
+Both scripts will, based on a given version string:
+1. check out a respective version tag in both `coreos-overlay` and `portage-stable`
+2. update the `version.txt` file accordingly
+3. create a new commit with the above changes
+4. tag the commit with the version string and push the tag.
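+
+The `version.txt` update in step 2 can be pictured like this - a simplified sketch with a hypothetical helper name; the real logic lives in the CI automation scripts:
+
+```shell
+# Split a full version string such as "3066.1.0+5-gcf4ff44a" into its
+# fields and write them to a version.txt-style file.
+update_version_txt() {
+  local file="$1" version="$2"
+  local version_id="${version%%+*}"   # version without the build suffix
+  local build_id=""
+  if [ "${version}" != "${version_id}" ]; then
+    build_id="${version#*+}"
+  fi
+  {
+    echo "FLATCAR_VERSION=${version}"
+    echo "FLATCAR_VERSION_ID=${version_id}"
+    echo "FLATCAR_BUILD_ID=\"${build_id}\""
+  } > "${file}"
+}
+update_version_txt /tmp/version.txt "3066.1.0+5-gcf4ff44a"
+```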
+
+
+## Build automation and build steps
+
+The Flatcar Container Linux build process consists of
+
+1. compiling packages from source, and generating a new OS image release version / tag
+2. creating a generic OS image file from the resulting binary packages
+3. creating one or more vendor-specific image files from the generic OS image.
+
+
+Optionally, the build process may include building the SDK from scratch based on a previous - existing - SDK, e.g. to update core build tools and utilities.
+In that case, the above 3 steps are preceded by
+
+1. Compile all core and SDK packages from source to generate a new SDK root FS and build a tarball from that; generate a new SDK release version and set the OS release to the same (SDK) version.
+2. Build a base SDK container image using the tarball from 1.
+ 1. build amd64 and arm64 toolchains and related board support
+ 2. then, based on the image from step 2.i, generate three container images - "all", "amd64", and "arm64" - with the respective board support included.
+
+
+Running all 5 steps in one go will produce a new SDK and new OS image based on that new SDK.
+In this pipeline, both a new SDK version as well as a new OS image version are generated.
+This is what we call a "full" (or "all-full") build.
+The main use case is for nightly builds of the "main" branches where development of new features happen.
+A new major version release will also use this process (and can be seen as a "special case" of a nightly build of "main").
+New major releases always include a new SDK.
+
+Running only the 3 OS image steps is used for active (i.e. supported) release branches.
+This uses an existing SDK and thus only generates a new OS image version.
+Usually, stabilisation of a major release (alpha -> beta -> stable) uses the same SDK release during its lifetime, so there's no need to always build the SDK.
+Only in rare cases is it necessary to update the SDK after a new major version has been published.
+
+
+### Automation scripts
+
+The [build automation scripts][scripts-repo-ci] reflect the 5 steps outlined above; each step is done in a separate script.
+Check out the build automation's `README.md` to get an overview.
+Each of the scripts contains documentation of the inputs and outputs of the respective build step:
+
+1. [`sdk_bootstrap.sh`](https://github.com/flatcar/scripts/blob/main/ci-automation/sdk_bootstrap.sh) builds a new SDK tarball from scratch
+2. [`sdk_container.sh`](https://github.com/flatcar/scripts/blob/main/ci-automation/sdk_container.sh) builds an SDK container image from a tarball
+3. [`packages.sh`](https://github.com/flatcar/scripts/blob/main/ci-automation/packages.sh) builds all binary packages for an OS image
+4. [`image.sh`](https://github.com/flatcar/scripts/blob/main/ci-automation/image.sh) builds a generic OS image
+5. [`vms.sh`](https://github.com/flatcar/scripts/blob/main/ci-automation/vms.sh) builds vendor-specific images
+
+CI / build automation infrastructure should set up the steps in a build pipeline.
+Artifacts of a preceding build step are fed into the succeeding step.
+The scripts are meant to implement build logic in a CI agnostic manner; the concrete CI system used (Jenkins, Bamboo, etc.) should require only very minimal glue logic to run builds.
+
+The build scripts should run on most Linux-based nodes out of the box; `git` and `docker` are the only requirements.
+In the Flatcar project, we use Flatcar Container Linux on our CI worker nodes.
+
+
+### Auxiliary infrastructure
+
+Apart from infrastructure to run the CI / builds on, we also need a server for caching build artifacts.
+Build artifacts are mostly container images - with only a few exceptions - and are almost always large (several gigabytes each).
+To not overly pollute CI workers' disk space, the build scripts support an "artifact cache" server.
+Requirements for this server are rather simple - it needs sufficient disk space (we use 7TB on Flatcar's CI, which holds ~50 past builds), SSH access (for rsync), and the ability to serve artifacts from the (rsync/SSH) path prefix via HTTPS.
+See the `BUILDCACHE_…` settings in the [CI automation settings file](https://github.com/flatcar/scripts/blob/main/ci-automation/ci-config.env) for adapting the build scripts to your environment.
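+
+As an illustration, a minimal nginx vhost could serve such a cache - the hostname, paths, and certificate locations below are placeholders, and any HTTPS-capable file server works just as well:
+
+```nginx
+server {
+    listen 443 ssl;
+    server_name buildcache.example.com;          # placeholder hostname
+    ssl_certificate     /etc/ssl/buildcache.crt; # placeholder certificate
+    ssl_certificate_key /etc/ssl/buildcache.key;
+
+    # Serve the same directory tree that the build scripts rsync to over SSH.
+    root /srv/buildcache;
+    autoindex on;
+}
+```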
+
+[scripts-repo-ci]: https://github.com/flatcar/scripts/tree/main/ci-automation
+[mod-cl]: sdk-modifying-flatcar
diff --git a/content/docs/latest/reference/developer-guides/sdk-disk-partitions.md b/content/docs/latest/reference/developer-guides/sdk-disk-partitions.md
new file mode 100644
index 00000000..9e8472ad
--- /dev/null
+++ b/content/docs/latest/reference/developer-guides/sdk-disk-partitions.md
@@ -0,0 +1,59 @@
+---
+title: Flatcar Container Linux disk layout
+weight: 10
+aliases:
+ - ../../os/sdk-disk-partitions
+---
+
+Flatcar Container Linux is designed to be reliably updated via a continuous stream of updates. The operating system has 9 different disk partitions, utilizing a subset of those to make each update safe and enable a roll-back to a previous version if anything goes wrong.
+
+## Partition table
+
+| Number | Label | Description | Partition Type |
+|:------:|------------|-------------------------------------------------------------------|-----------------------|
+| 1 | EFI-SYSTEM | Contains the bootloader | FAT32 |
+| 2 | BIOS-BOOT | Contains the second stages of GRUB for use when booting from BIOS | grub core.img |
+| 3 | USR-A | One of two active/passive partitions holding Flatcar Container Linux | EXT2 |
+| 4 | USR-B | One of two active/passive partitions holding Flatcar Container Linux | (empty on first boot) |
+| 5 | ROOT-C | This partition is reserved for future use | (none) |
+| 6 | OEM | Stores configuration data specific to an [OEM platform][OEM docs] | BTRFS |
+| 7 | OEM-CONFIG | Optional storage for an OEM | (defined by OEM) |
+| 8 | (unused) | This partition is reserved for future use | (none) |
+| 9 | ROOT | Stateful partition for storing persistent data | EXT4, BTRFS, or XFS |
+
+For more information, [read more about the disk layout][chromium disk format] used by Chromium and ChromeOS, which inspired the layout used by Flatcar Container Linux.
+
+[OEM docs]: ../../installing/community-platforms/notes-for-distributors
+[chromium disk format]: https://chromium.googlesource.com/chromiumos/docs/+/HEAD/disk_format.md
+
+## Mounted filesystems
+
+Flatcar Container Linux is divided into two main filesystems, a read-only `/usr` and a stateful read/write `/`.
+
+### Read-only /usr
+
+The `USR-A` or `USR-B` partitions are interchangeable and one of the two is mounted as a read-only filesystem at `/usr`. After an update, Flatcar Container Linux will re-configure the GPT priority attribute, instructing the bootloader to boot from the passive (newly updated) partition. Here's an example of the priority flags set on an Amazon EC2 machine:
+
+```shell
+$ sudo cgpt show /dev/xvda
+ start size part contents
+ 270336 2097152 3 Label: "USR-A"
+ Type: Alias for coreos-rootfs
+ UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
+ Attr: priority=1 tries=0 successful=1
+```
+
+Flatcar Container Linux images ship with the `USR-B` partition empty to reduce the image filesize. The first Flatcar Container Linux update will populate it and start the normal active/passive scheme.
+
+The OEM partition is mounted at `/usr/share/oem`.
+
+### Stateful root
+
+All stateful data, including container images, is stored within the read/write filesystem mounted at `/`. On first boot, the ROOT partition and filesystem will expand to fill any remaining free space at the end of the drive.
+
+The data stored on the root partition isn't manipulated by the update process. In turn, we do our best to prevent you from modifying the data in `/usr`.
+
+Due to the unique disk layout of Flatcar Container Linux, `umount -l /etc && rm -rf --one-file-system --no-preserve-root /` is an unsupported but valid operation to purge any OS data. On the next boot, the machine should just start from a clean state. Note, however, that for a proper reset you should rather use the `flatcar-reset` tool, which also gives you control over which data to keep.
+
diff --git a/content/docs/latest/reference/developer-guides/sdk-modifying-flatcar.md b/content/docs/latest/reference/developer-guides/sdk-modifying-flatcar.md
new file mode 100644
index 00000000..1230a14e
--- /dev/null
+++ b/content/docs/latest/reference/developer-guides/sdk-modifying-flatcar.md
@@ -0,0 +1,678 @@
+---
+title: Guide to building custom Flatcar images from source
+weight: 10
+aliases:
+ - ../../os/sdk-modifying-flatcar
+ - ../../os/sdk-modifying-coreos
+---
+
+The guides in this document aim to enable engineers to update, and to extend, packages in both the Flatcar OS image as well as the SDK, to suit their own needs.
+The overarching goal of this collection of how-tos is to help you scratch your own itch and to set you up to play with Flatcar.
+We’ll cover everything you need to make the changes you want, and to produce an image for the runtime environment(s) you want to use Flatcar in (e.g. AWS, qemu, Packet, etc).
+By the end of the guide you will build a developer image that you can run under qemu and have tools for making changes to the OS image like adding or removing packages, or shipping custom kernels.
+Note that we chose this guide's "qemu" image target solely to enable local testing; the same process can be used to produce images for any and all targets (cloud providers etc.) supported by Flatcar.
+
+**Note** there is a "tl;dr" paragraph at the start of each section which summarises the commands discussed in the section.
+
+Flatcar Container Linux is an open source project. All of the source for Flatcar Container Linux is available on [github][github-flatcar]. If you find issues with these docs or the code please send a pull request.
+
+Please direct questions and suggestions to the [#flatcar:matrix.org Matrix channel][matrix] or [mailing list][flatcar-dev].
+
+Some resources provided by the community and Flatcar's maintainers are also available as an introduction to the Flatcar modification process:
+
+* https://www.youtube.com/watch?v=X4m_JEtlVok
+* https://fosdem.org/2022/schedule/event/modding_the_immutable_how_to_extend_flatcar/
+
+Please note that these resources might be outdated; only this page reflects the most up-to-date documentation.
+
+## Getting started
+
+
+
+**tl;dr** Check out a release branch and start the SDK (this uses the current Alpha release branch).
+```shell
+$ git clone https://github.com/flatcar/scripts.git
+$ cd scripts
+$ branch="$(git branch -r -l | awk -F'/' '/origin\/flatcar-[0-9]+$/ {print $2}' | sort | tail -n1)"
+$ git checkout "$branch"
+$ ./run_sdk_container -t
+```
+
+
+
+Flatcar Container Linux uses a containerised SDK; pre-built container images are available via [ghcr.io][ghcr-sdk].
+The SDK itself is containerised, but it requires version information and package build instructions to build an OS image.
+Version information and build instructions for all packages (`ebuilds`) are contained in the scripts repository:
+
+```
+scripts
+ +--sdk_container
+ +---------src
+ | +--third_party
+ | +------coreos-overlay
+ | +------portage-stable
+ `---------.repo
+ +----manifests
+ +-------- version.txt
+```
+
+There are 2 ways to use the SDK container:
+1. Standalone: Run the container and clone the scripts repo inside the container.
+ This is great for one-shot SDK usage; it's not optimal for sustained OS development since versioning is unclear and changes might get lost.
+2. Wrapped: Uses a wrapper script to run the container and to bind-mount the local scripts directory into the container.
+ **This is the recommended way of using the SDK.**
+
+**NOTE** The SDK container supports being run on docker or podman, with docker taking preference when both are available.
+The wrapper scripts will auto-detect which one is available, and use it.
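+
+The auto-detection is essentially equivalent to the following sketch (the wrapper scripts' actual logic differs in detail):
+
+```shell
+# Prefer docker over podman, mirroring the wrapper scripts' behaviour.
+detect_runtime() {
+  local rt
+  for rt in docker podman; do
+    if command -v "${rt}" >/dev/null 2>&1; then
+      echo "${rt}"
+      return 0
+    fi
+  done
+  echo "no container runtime found" >&2
+  return 1
+}
+```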
+
+### Clone the scripts repo
+
+The [scripts repository][scripts] - among other things - contains SDK wrapper scripts, a `version.txt` with release version information, and both the [coreos-overlay][coreos] and [portage-stable][portage] ebuild "repositories" in subdirectories (more on ebuilds later).
+
+The name "scripts" is historical - a better way to think of the scripts repo is as Flatcar's "SDK repo".
+
+```shell
+$ git clone https://github.com/flatcar/scripts.git
+$ cd scripts
+```
+
+#### Optionally, pick a release tag or branch
+
+Cloning the repo will have it land on the `main` branch, which can be thought of as "alpha-next" - i.e. the next major Alpha release.
+Even though main is smoke-tested in nightly builds, it might occasionally be broken in subtle ways.
+This can make it harder to track down issues introduced by actual changes to Flatcar.
+
+* Release **tags** signify specific (past) releases, like "stable-2905.2.4" or "beta-3033.1.1". Tags are created in release branches.
+* Release **branches** only use major numbers and might contain, on top of the latest release tag, changes for the next upcoming release.
+ Branches follow the pattern "flatcar-[MAJOR]".
+ Following the tag example above, "flatcar-2905" would contain all changes of major release version 2905 up until stable-2905.2.4, and might contain changes on top of 2905.2.4 slated for a future 2905.2.5 release.
+
+
+It is generally recommended to base work on the latest Alpha release.
+While new features should target `main` at merge time, Alpha is a tested release and therefore offers a more stable foundation to base work on.
+At the same time, Alpha is not too far away from `main` so the risk of merge-time conflicts should be low.
+
+Find the latest Alpha release branch:
+
+```shell
+$ git branch -r -l | awk -F'/' '/origin\/flatcar-[0-9]+$/ {print $2}' | sort | tail -n1
+```
+
+If the goal is to reproduce and to fix a bug of a release other than Alpha, it is recommended to base the work on the latest point release of the respective major version instead of Alpha. All currently "active" major versions can be found at the top of the [releases][flatcar-releases] web page.
+
+For quick reference, to get the latest stable release tag, use:
+```shell
+$ git tag -l | grep -E 'stable-[0-9.]+$' | sort | tail -n 1
+```
+(replace `stable` with `beta` or `alpha` in accordance with your needs).
+
+
+```shell
+$ git checkout [branch-or-tag-from-above]
+```
+
+Lastly, to verify the version in use, consult the version file.
+This file is updated on each release and reflects the SDK and OS versions corresponding to the current commit.
+
+```shell
+$ cat sdk_container/.repo/manifests/version.txt
+FLATCAR_VERSION=3066.1.0
+FLATCAR_VERSION_ID=3066.1.0
+FLATCAR_BUILD_ID=""
+FLATCAR_SDK_VERSION=3066.0.0
+```
+
+The example above is from the release / maintenance branch of the 3066 major release at the time of writing (3066 was in the Beta channel at that time).
+
+**NOTE** that the version file at `sdk_container/.repo/manifests/version.txt` will be updated by `run_sdk_container` to include the git shortlog hash.
+This file is under revision control because it pins the latest official OS release and SDK version of the branch you're working on.
+**If you want to switch branches later, make sure to run `git checkout sdk_container/.repo/manifests/version.txt` to revert the change made by `run_sdk_container`.**
+
+### Start the SDK
+
+We are now set to run the SDK container.
+This will download the container image of the respective version if not present locally, and then start the container with the local directory bind-mounted.
+
+```shell
+$ ./run_sdk_container -t
+sdk@flatcar-sdk-all-3066_0_0_os-beta-3066_1_0-gcf4ff44a ~/trunk/src/scripts $ cat sdk_container/.repo/manifests/version.txt
+```
+
+The `-t` flag is used to tell docker to allocate a TTY. It should be omitted when calling `run_sdk_container` from a script.
+
+The container uses the "sdk" user (user and group ID are updated on container entry to match the host user's UID and GID).
+After entering you're put right into the (host) script repository's bind mount root.
+By default, the name of the container contains SDK and OS image version.
+If there are changes on top of the latest release (either your own, or upstream changes slated for the next patch release), the version file will have been updated:
+
+```shell
+sdk@flatcar-sdk-all-3066_0_0_os-beta-3066_1_0-gcf4ff44a ~/trunk/src/scripts $ cat sdk_container/.repo/manifests/version.txt
+FLATCAR_VERSION=3066.1.0+5-gcf4ff44a
+FLATCAR_VERSION_ID=3066.1.0
+FLATCAR_BUILD_ID="5-gcf4ff44a"
+FLATCAR_SDK_VERSION=3066.0.0
+```
+
+We're basing our work on release 3066.1.0 in this example, the current branch has 5 patches on top of that release, and the latest patch has the shortlog hash `cf4ff44a`.
+This leads to `FLATCAR_BUILD_ID` being set (to the output of `git describe --tags`) and is reflected in the container name `...os-beta-3066_1_0-5-gcf4ff44a`.
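+
+For reference, the build ID and short hash can be pulled back out of such a `git describe --tags`-style string with plain shell (illustrative only):
+
+```shell
+describe="beta-3066.1.0-5-gcf4ff44a"   # <channel>-<version>-<patches>-g<hash>
+build_id="$(echo "${describe}" | sed -E 's/^[a-z]+-[0-9.]+-//')"
+short_hash="${describe##*-g}"
+echo "${build_id}"     # → 5-gcf4ff44a
+echo "${short_hash}"   # → cf4ff44a
+```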
+
+
+#### A note on persistence
+
+`run_sdk_container` re-uses containers once started; containers to be re-used are identified by name (see above).
+Persistence helps with keeping changes in your work environment across container runs.
+**Keep in mind though that a new container will be created if the working commit in the scripts repository changes**.
+This is usually desired to prevent version muddling.
+It can be explicitly overridden by using the `-n` argument to `run_sdk_container`.
+
+
+## Building an OS image
+
+
+
+**tl;dr** Build packages, base image, and vendor (qemu launchable) image.
+This builds for the default architecture, `amd64-usr`.
+Use `--board=arm64-usr` with packages / image script to build for ARM64.
+```shell
+sdk@flatcar-sdk $ ./build_packages
+sdk@flatcar-sdk $ ./build_image
+sdk@flatcar-sdk $ ./image_to_vm.sh
+```
+
+
+
+Before we discuss any modifications to the image, we'll do a full image build first. This will create a "known-good" base to build your changes on.
+
+### Select the target architecture
+
+**NOTE on cross-compilation**: if you are cross-compiling make sure a static aarch64 qemu is set up via binfmt-misc on your host machine.
+Some packages compile and execute intermediate commands during their build process - this can break cross-compiling since the commands are built for the target architecture.
+The qemu binary on the host needs to be a static binary since it will be called from within the container context.
+Check if your distro has a `qemu-user-static` package that you can install or whether it already has support for aarch64 in `binfmt-misc`; on e.g. Fedora there's a `qemu-aarch64` entry in `/proc` for that (the name of the proc file may vary across distributions though):
+```shell
+$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64
+enabled
+interpreter /usr/bin/qemu-aarch64-static
+flags: F
+offset 0
+magic 7f454c460201010000000000000000000200b700
+mask ffffffffffffff00fffffffffffffffffeffffff
+```
+Note the [**F flag**](https://www.kernel.org/doc/html/latest/admin-guide/binfmt-misc.html) to tell the kernel to preload ("fix") the binary instead of loading it lazily when emulation is required (since the latter leads to issues in namespaced environments).
+
+Should emulation via `binfmt-misc` *not* be set up it can be added e.g. via the host's `systemd-binfmt` service like this:
+```shell
+$ cat /usr/lib/binfmt.d/qemu-aarch64-static.conf
+:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F
+$ sudo systemctl restart systemd-binfmt.service
+```
+
+You can run `docker run --rm -ti docker.io/arm64v8/alpine` or `docker run --rm -ti --arch arm64 docker.io/alpine` on your host system as an easy check to verify everything is ready.
+
+At the time of writing the SDK supports two target architectures: AMD64 (x86-64) and ARM64.
+The target architecture can be specified by use of the `--board=` parameter to both `build_packages` and `build_image`:
+* `--board=amd64-usr` will build an x86 image
+* `--board=arm64-usr` will build an ARM64 image
+
+If no architecture is specified then AMD64 will be used by default.
+
+### Build the OS image packages
+
+Beware, it's likely this won't *actually* build the packages but rather download pre-built packages from the Flatcar binary package cache (see below on how to force-rebuild a package that you modified).
+The package cache is updated on every release.
+
+```shell
+$ ./build_packages [--board=...]
+```
+
+The command should download most packages from our binary cache - speeding up the "build" - since we are basing this on an existing release.
+All packages will be installed to `/build/`.
+
+You can rebuild individual packages by running `emerge-<arch>-usr PACKAGE`, e.g. `emerge-amd64-usr vim`. In this case, no binary cache will be used and the package will always be rebuilt.
+
+If you change the initramfs (mainly for Dracut and Ignition related work), you first need to rebuild the bootengine package and then the kernel package. To make sure that no old initramfs is reused, first delete the file `/build/<arch>-usr/usr/share/bootengine/bootengine.cpio`, then rebuild the bootengine and kernel packages.
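+A minimal sketch of those steps, assuming an AMD64 build (the bare `bootengine` package name is given without its category; emerge resolves it when unambiguous):
+```shell
+# remove the stale initramfs so it cannot be reused
+~/trunk/src/scripts $ rm -f /build/amd64-usr/usr/share/bootengine/bootengine.cpio
+# rebuild bootengine first, then the kernel that embeds the initramfs
+~/trunk/src/scripts $ emerge-amd64-usr bootengine
+~/trunk/src/scripts $ emerge-amd64-usr coreos-kernel
+```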
+
+### Create the Flatcar Container Linux OS image
+
+Now that we have all packages for the OS image either built or downloaded from the binary cache, we'll build a production base image:
+
+```shell
+$ ./build_image [--board=...]
+```
+
+This will create a temporary directory into which all of the binary packages built above will be installed. Then, a generic full [disk image](sdk-disk-partitions) is created from that temp directory.
+After `build_image` completes, it prints commands for converting the raw bin into a bootable virtual machine, by means of the `image_to_vm.sh` command.
+
+To create a qemu image for local testing, run
+```shell
+$ ./image_to_vm.sh --from=../build/images/arm64-usr/developer-latest [--board=...]
+```
+
+For other vendor images, pass the `--format=` parameter (see `./image_to_vm.sh --help`).
+In general, `image_to_vm.sh` will read the generic disk image, install any vendor specific tools to the OEM partition where applicable (e.g. Azure VM tools for the Azure VM), and produce a vendor specific image. In the case of QEMU, a qcow2 image is produced. QEMU does not require vendor specific tooling in the OEM partition.
+
+On the host outside the container, the image(s) built are located in `__build__/images/…`.
+This directory is also bind-mounted into the container by `run_sdk_container`.
+
+### Booting
+
+`image_to_vm.sh` will also generate a wrapper script to launch a Flatcar VM with qemu. In a new terminal, without entering the SDK, you can boot the VM with:
+```shell
+$ src/build/images/arm64-usr/developer-latest/flatcar_production_qemu.sh
+```
+
+After the VM is running you should be able to SSH into Flatcar (using port 2222):
+```shell
+$ ssh core@localhost -p 2222
+```
+
+You should be able to log in with your SSH public key (i.e. automatically).
+
+If you encounter errors with KVM, verify that virtualization is supported by your CPU by running `egrep '(vmx|svm)' /proc/cpuinfo`. The `/dev/kvm` device node will be present on your host OS when virtualization is enabled in the BIOS.
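+Both checks from the paragraph above, side by side:
+```shell
+# a non-empty result means the CPU supports hardware virtualization
+$ egrep '(vmx|svm)' /proc/cpuinfo
+# the kvm device node only exists when virtualization is enabled in the BIOS
+$ ls -l /dev/kvm
+```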
+
+#### Boot Options
+
+After `image_to_vm.sh` completes, run `./flatcar_production_qemu.sh -- -display curses` to launch a graphical interface to log in to the Flatcar Container Linux VM.
+
+You could instead use the `-nographic` option, `./flatcar_production_qemu.sh -nographic`, which gives you the ability to switch from the VM to the QEMU monitor console by pressing CTRL+a and then c. To close the Flatcar Container Linux guest OS VM, run `sudo systemctl poweroff` inside the VM.
+
+You can supply SSH keys and use a different SSH port by running, for example, `./flatcar_production_qemu.sh -a ~/.ssh/authorized_keys -p 2223 -- -display curses`. Refer to the [Booting with QEMU][booting-qemu] guide for more information on this usage.
+
+## Making changes
+
+Now for the interesting part! We are going to discuss two ways of making changes: adding or upgrading a package, and modifying the kernel configuration.
+
+### A brief introduction to Gentoo and how it relates to the SDK
+
+Flatcar Container Linux is based on ChromiumOS, which is based on Gentoo.
+While the ChromiumOS heritage has faded and is barely visible nowadays, we heavily leverage Gentoo processes and tools.
+
+Contrary to traditional Linux distributions, Gentoo applications and “packages” are compiled at installation time.
+Gentoo itself does not ship packages - instead, it consists of a massive number of ebuild files to build applications at installation time (that’s an oversimplification as there are binary package caches, but that’s beyond the scope of this document).
+While the Flatcar SDK can be understood as a Gentoo derivative, the OS image is special.
+The OS image is not self-contained, i.e. it cannot install / update packages - it lacks both a compiler to build packages as well as tools to orchestrate builds and install the resulting binaries.
+Instead, OS images are built via the SDK, by building packages in the SDK, then installing the binaries into a chroot environment.
+From the chroot environment, the resulting OS image is generated.
+
+Packages in Gentoo are organised in a flat hierarchy of `<group>/<package name>`.
+For instance, Linux kernel related ebuilds are in the group `sys-kernel` (kernel sources, headers, off-tree modules, etc.), while mail clients like thunderbird or mutt are in group `mail-client`.
+Each package directory may contain ebuild files for multiple versions, e.g. `dev-lang/python` contains a host of python versions (used in the SDK).
+Furthermore, each package directory contains a `Manifest` file with cryptographic checksums and file sizes of the package's source tarball(s), and may contain a `files/` directory containing auxiliary files to build / install the package, e.g. patches or config files.
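+As an illustration, a typical package directory might look like this (version and patch file names below are made up):
+```
+dev-lang/python/
+├── Manifest                  # checksums + sizes of the source tarballs
+├── python-3.10.14.ebuild     # one ebuild per packaged version
+├── python-3.11.9.ebuild
+└── files/                    # auxiliary files: patches, config snippets
+    └── python-3.10-cross-compile.patch
+```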
+
+Multiple package sources - in separate directories - can be stacked on top of each other.
+These “overlays” allow custom extensions or even custom sub-trees on top of an existing foundation.
+In these stacks, “upper” level packages override “lower” level ones.
+The Flatcar build system uses a fork of Gentoo upstream’s `portage-stable` at [sdk_container/src/third_party/portage-stable/][portage] as its base.
+Packages in this directory are kept in sync with Gentoo upstream, and are Flatcar's main source of patches (bug fixes and package stabilisations) to upstream Gentoo.
+Flatcar specific tools live in the overlay directory `coreos-overlay` at [sdk_container/src/third_party/coreos-overlay/][coreos].
+
+Packages are built using "ebuild" files.
+These files contain dependencies of a package - both build and runtime - as well as implement callbacks for downloading, patching, building, and installing the package.
+The callbacks in these ebuild files are written in shell.
+The Gentoo package system - portage - will, when building / installing a package, run the respective callbacks in order (e.g. `src_unpack()` for unpacking package sources, and `src_compile()` for building).
+Common ebuild functions shared across many packages are implemented via eclasses (in `eclass/`) which can be inherited by package ebuilds.
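+A heavily trimmed, hypothetical ebuild illustrating the pieces named above (not a real Flatcar package; names and URL are made up):
+```shell
+EAPI=8
+
+inherit systemd            # pull shared helpers in from eclass/systemd.eclass
+
+DESCRIPTION="Example tool"
+SRC_URI="https://example.org/${P}.tar.gz"
+LICENSE="MIT"
+SLOT="0"
+KEYWORDS="amd64 arm64"
+
+DEPEND="dev-libs/openssl"  # build-time dependency
+RDEPEND="${DEPEND}"        # runtime dependency
+
+src_compile() {            # callback run by portage during the build phase
+    emake
+}
+```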
+
+For more information on Gentoo in general please refer to the [Gentoo devmanual](https://devmanual.gentoo.org/).
+
+### Get to know the SDK chroot
+
+When entering the SDK you are in the `~/trunk/src/scripts` repository which can be seen as the build system.
+It is one of the three repositories that define a Flatcar build:
+1. `scripts` (the directory you're in) contains high-level build scripts to build all packages for an image, to build an image, and to bootstrap an SDK.
+2. `~/trunk/src/third_party/portage-stable` contains ebuild files of all packages close to (or identical to) Gentoo upstream.
+3. `~/trunk/src/third_party/coreos-overlay` contains Flatcar specific packages like ignition and mayday, as well as Gentoo packages which were significantly modified for Flatcar, like the Linux kernel, or systemd.
+
+The SDK chroot you just entered is self-sustained and has all necessary "host" binaries installed to build Flatcar packages.
+Flatcar OS image packages are "cross-compiled" even when host machine equals target machine, e.g. building an x86 image on an x86 host.
+Cross-compiling via Gentoo's "crossdev" environment allows us to install packages into a chroot, from which the OS image is then built.
+The OS image packages therefore have their own root inside the SDK - AMD64 is located at `/build/amd64-usr/` and ARM64 is under `/build/arm64-usr/`.
+
+Both board chroot and SDK use Gentoo's portage to manage their respective packages: `sudo emerge` is used to manage SDK packages, and `emerge-<arch>-usr` (`emerge-amd64-usr` or `emerge-arm64-usr`, without sudo) is used to do the same for the OS image roots.
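+For instance, installing the same (arbitrary example) package into the two different roots:
+```shell
+# install into the SDK itself (root is /)
+sdk@flatcar-sdk $ sudo emerge dev-vcs/git
+# install into the AMD64 OS image root (/build/amd64-usr/)
+sdk@flatcar-sdk $ emerge-amd64-usr dev-vcs/git
+```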
+
+
+## Add (or update) a package
+
+All of the following is done inside the SDK container, i.e. after running
+```shell
+$ ./run_sdk_container.sh -t
+```
+
+
+
+**tl;dr** In the SDK container, introduce a new upstream package from Gentoo.
+```shell
+~/trunk/src/scripts $ git clone --depth 5 https://github.com/gentoo/gentoo.git
+~/trunk/src/scripts $ mkdir -p ../third_party/portage-stable/<group>/
+~/trunk/src/scripts $ cp -R gentoo/<group>/<package> ../third_party/portage-stable/<group>/
+~/trunk/src/scripts $ emerge-amd64-usr --newuse <group>/<package>
+# optional - add missing eclass
+~/trunk/src/scripts $ cp gentoo/eclass/<eclass name>.eclass ../third_party/portage-stable/eclass/
+~/trunk/src/scripts $ emerge-amd64-usr --newuse <group>/<package>
+# optional - unmask package
+~/trunk/src/scripts $ vim ../third_party/coreos-overlay/profiles/coreos/base/package.accept_keywords
+# remove '~' from arm64 and amd64
+~/trunk/src/scripts $ emerge-amd64-usr --newuse <group>/<package>
+# optional - add missing dependencies, see line 2 ff. above
+# add package to OS image RDEPEND="..."
+~/trunk/src/scripts $ vim ../third_party/coreos-overlay/coreos-base/coreos/coreos-0.0.1.ebuild
+~/trunk/src/scripts $ emerge-amd64-usr coreos-base/coreos
+~/trunk/src/scripts $ ./build_image
+~/trunk/src/scripts $ ./image_to_vm.sh --from=../build/images/amd64-usr/latest --format qemu
+# from outside the container, i.e. on the host:
+scripts $ ../build/images/amd64-usr/latest/flatcar_production_qemu.sh
+# in a different terminal on the host:
+scripts $ ssh core@localhost -p 2222
+# now run new software to verify it works
+core@localhost ~ $ ...
+```
+
+
+
+Let’s add a new package to our custom image.
+We’ll use a package already available in Gentoo upstream, add it to our SDK, chase down dependencies, and add those, too.
+Updating a package follows the same process - but instead of adding whole packages, new versions’ ebuild files are added to existing ones.
+Note that adding a package “from scratch” - i.e. with no ebuild available via upstream is a completely different kind of beast and requires experience with both Gentoo as well as with fixing build and toolchain issues - so we're not going to discuss that here.
+
+To get access to a rich and up-to-date selection of packages, we’ll use the upstream Gentoo ebuilds repository.
+We’ll copy the ebuild file of the package we want to add from upstream gentoo to portage-stable, as well as the package’s dependencies.
+
+Let’s start by checking out the Gentoo upstream ebuilds to some place outside the SDK.
+We’ll only do a shallow clone to limit the amount of data we need to download:
+```shell
+~/trunk/src/scripts $ git clone --depth 5 https://github.com/gentoo/gentoo.git
+```
+
+This gives us ~170 groups with a total of ~20,000 packages to pick from.
+
+Browse the Gentoo packages and find the one you want to add, or - in case of package updates - the newer version's `.ebuild` file of the package you want to update.
+Create the respective group directory in `~/trunk/src/third_party/portage-stable/` if it does not exist.
+Then copy the whole package directory (including all upstream ebuilds and supplemental files, like patches) into the SDK’s `portage-stable/` directory.
+
+In the case of a package update, copy the new version's ebuild file to either `coreos-overlay` or `portage-stable`, depending on where the package to be upgraded resides.
+Then add the newer version’s tarball checksum from the Gentoo package's `Manifest` file to the one in `portage-stable`.
+
+```shell
+~/trunk/src/scripts $ mkdir -p ../third_party/portage-stable/<group>/
+~/trunk/src/scripts $ cp -R gentoo/<group>/<package> ../third_party/portage-stable/<group>/
+```
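+For the package-update case, rather than editing the `Manifest` by hand you can usually regenerate it (a sketch; this assumes `ebuild-amd64-usr` accepts the same subcommands as Gentoo's `ebuild`, where `manifest` recomputes the checksums after downloading the tarball):
+```shell
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/portage-stable/<group>/<package>/<package>-<version>.ebuild manifest
+```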
+
+The next step will have us add all required dependencies for the new package.
+This usually is not necessary for package upgrades.
+We will try to build the new / upgraded package, chase down all of the dependencies, and likewise copy those to the respective `/src/third_party` folder, too.
+Depending on the eclasses inherited by the new package’s ebuild file, we might need to copy `.eclass` files, too.
+
+So let’s enter the SDK chroot and try to build and install:
+```shell
+~/trunk/src/scripts $ emerge-amd64-usr --newuse <group>/<package>
+```
+
+If you see walls of error output that contain lines like `[XXXXX].eclass could not be found by inherit()` then we need to copy the respective `.eclass` file.
+It means that the ebuild of the package we are trying to add contains in its `inherit` line an eclass which is not present in our SDK’s portage-stable.
+So let's copy the missing eclass:
+```shell
+~/trunk/src/scripts $ cp gentoo/eclass/[XXXXX].eclass ../third_party/portage-stable/eclass/
+```
+and re-run emerge. Repeat with other missing classes until the errors go away.
+
+Lastly, the SDK might lack unmasks if the respective architecture is masked in the upstream ebuild of the package(s) added (i.e. the `KEYWORDS` variable contains `"... ~amd64 ~arm64 ... "`). Gentoo upstream uses these masks to mark a package as experimental. If that’s the case then emerge will fail with an error like
+```shell
+ The following keyword changes are necessary to proceed:
+ [ ... ]
+ # required by =<group>/<package> (argument)
+ =<group>/<package>-<version> **
+```
+
+To proceed, add the package name and version, and its masked architectures to the `package.accept_keywords` file inside the `coreos` profile. Which `package.accept_keywords` file should be updated depends on a couple of factors - whether the package is needed for both the SDK and the OS image or only one of them, and whether it is needed for both AMD64 and ARM64 images or only one architecture. Please refer to `README.md` in coreos-overlay for a summary of the profiles.
+Flatcar follows its own stabilisation process (through the Alpha - Beta - Stable channels); it's perfectly fine to unmask a package upstream considers unstable.
+
+If you want to use optional build flags (USE flags in Gentoo lingo) e.g. for compiling optional library support into the application, add the new package and the respective USE flag(s) to `src/third_party/portage-stable/profiles/base/package.use`.
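+A `package.use` entry is simply the package atom followed by the flags, e.g. (hypothetical package and flag):
+```
+# enable optional LDAP support when building net-misc/example-tool
+net-misc/example-tool ldap
+```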
+
+After the above issues have been addressed and emerge is not reporting errors anymore, we might need to add dependencies of our new package. If `emerge` fails, look for errors like:
+```
+emerge: there are no ebuilds to satisfy "<group>/<package>:=" for /build/amd64-usr/.
+```
+
+For each of those missing dependencies, repeat the process of adding a package described above.
+
+Of course, the missing dependencies can also have missing dependencies on their own.
+Or missing `.eclass` files.
+Or are in need of more keywords / unmasks.
+Worry not, just keep iterating, things will work eventually.
+
+
+### Rebuild the image
+
+After we’ve successfully built and packaged (calling `emerge` without parameters does both) it’s time to create a new OS image to validate whether the new addition works as intended.
+We’ll first generate an image from our workspace (where we built a "stock" image successfully already) to make sure the new addition does not cause file conflicts with other packages, and to be able to validate the new software in a live system.
+
+First, we add the new package to the base image packages list.
+The list of packages for the base image is an ebuild file itself - and the packages list is just a list of dependencies in that ebuild.
+Let’s add the package:
+```shell
+~/trunk/src/scripts $ vim ../third_party/coreos-overlay/coreos-base/coreos/coreos-0.0.1.ebuild
+```
+In Vim, add `<group>/<package>` to the list of packages in `RDEPEND="..."`.
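+The edit amounts to appending one atom to the dependency list, e.g. (the package name is a hypothetical placeholder; the existing entries stay untouched):
+```shell
+RDEPEND="
+    ...existing packages...
+    net-misc/example-tool
+"
+```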
+
+Then, apply the change:
+
+```shell
+~/trunk/src/scripts $ emerge-amd64-usr coreos-base/coreos
+```
+
+Now we’ll rebuild the OS image from the updated list of packages, then run it in qemu.
+This will allow us to validate whether the software added works to our expectations:
+```shell
+~/trunk/src/scripts $ ./build_image
+~/trunk/src/scripts $ ./image_to_vm.sh --from=../build/images/amd64-usr/latest --format qemu
+```
+
+After building the qemu image, we can now start it and SSH into the new OS image's instance.
+Here, we can validate whether our updated / added application works as expected.
+We start the qemu instance *on the host*, i.e. outside the container, so we can better interact with it from the host.
+Then, in a *different terminal*, we ssh into the host:
+```shell
+scripts $ ../build/images/amd64-usr/latest/flatcar_production_qemu.sh
+# switch terminals
+scripts $ ssh core@localhost -p 2222
+core@localhost ~ $ ...
+```
+
+Now try commands from the package you added and make sure they work, or check the presence of files (e.g. new libraries).
+If something is wrong (e.g. config files are missing etc.), go back and e.g. change the application ebuild accordingly, addressing the errors you’ve observed.
+Then `emerge` the application once more to force re-packaging, and rebuild the image and test again.
+
+## Change the kernel configuration / add or remove a kernel module
+
+All of the following is done inside the SDK container, i.e. after running
+```shell
+$ ./run_sdk_container.sh -t
+```
+
+
+
+**tl;dr** In the SDK container, build the kernel package with a custom config, run+test, and persist
+```shell
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-modules/coreos-modules-<version>.ebuild configure
+~/trunk/src/scripts $ cd /build/amd64-usr/var/tmp/portage/sys-kernel/coreos-modules-<version>/work/coreos-modules-<version>/build
+ $ cp .config ~/trunk/src/scripts/kernel-config.orig
+ $ make menuconfig
+ $ cp .config ~/trunk/src/scripts/kernel-config.mine
+ $ cd ~/trunk/src/scripts/
+~/trunk/src/scripts $ rm -f /build/amd64-usr/var/tmp/portage/sys-kernel/coreos-modules-<version>/.compiled
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-modules/coreos-modules-<version>.ebuild package
+~/trunk/src/scripts $ rm -rf /build/amd64-usr/var/tmp/portage/sys-kernel/coreos-kernel-<version>
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-kernel/coreos-kernel-<version>.ebuild package
+~/trunk/src/scripts $ ./build_image --board=amd64-usr
+~/trunk/src/scripts $ ./image_to_vm.sh --from=../build/images/amd64-usr/latest --board=amd64-usr --format qemu
+
+# on the host
+scripts $ ../build/images/amd64-usr/latest/flatcar_production_qemu.sh
+# in a different terminal on the host:
+scripts $ ssh core@localhost -p 2222
+core@localhost ~ $ ...
+
+~/trunk/src/scripts $ diff kernel-config.orig kernel-config.mine > ../third_party/coreos-overlay/sys-kernel/coreos-modules/files/my.diff
+~/trunk/src/scripts $ cd ../third_party/coreos-overlay/sys-kernel/coreos-modules/files/
+~/trunk/src/scripts $ vim -O commonconfig* amd64_defconfig* my.diff
+~/trunk/src/scripts $ rm my.diff
+~/trunk/src/scripts $ emerge-amd64-usr sys-kernel/coreos-modules
+~/trunk/src/scripts $ emerge-amd64-usr sys-kernel/coreos-kernel
+~/trunk/src/scripts $ ./build_image --board=amd64-usr
+~/trunk/src/scripts $ ./image_to_vm.sh --from=../build/images/amd64-usr/latest --board=amd64-usr --format qemu
+
+# on the host
+scripts $ ../build/images/amd64-usr/latest/flatcar_production_qemu.sh
+# in a different terminal on the host:
+scripts $ ssh core@localhost -p 2222
+core@localhost ~ $ ...
+```
+
+
+
+Next, we’ll look into changing the kernel configuration - e.g. for adding a kernel module or a core kernel feature not shipped with stock Flatcar.
+This will give you a deep dive into the low-level bits of Gentoo's build and packaging system.
+To modify the configuration of a package we will run its individual build steps manually - by use of `ebuild` instead of `emerge`.
+This will allow for pausing after downloading the sources, to change the source tree configuration before building and installing.
+
+Our first step is to set you all up with a pre-configured stock Flatcar Linux kernel to base your modifications on.
+The Flatcar Linux kernel build is split over multiple gentoo ebuild files which all reside in [`coreos-overlay/sys-kernel/`](https://github.com/flatcar/scripts/tree/main/sdk_container/src/third_party/coreos-overlay/sys-kernel):
+
+* `coreos-sources/` for pulling the kernel sources from git.kernel.org
+* `coreos-kernel/` for building the main kernel (vmlinuz)
+* `coreos-modules/` for building the modules, and - somewhat counterintuitively - containing all kernel config files. The kernel configuration in `coreos-modules/files/` is split into
+ * a platform independent part - `commonconfig-<kernel version>`
+ * platform dependent configs - `<arch>_defconfig-<kernel version>`
+**NOTE** that these configuration snippets do not contain the whole kernel config but only Flatcar specific ones.
+During the build process the config snippets are merged with the kernel's defaults for all the settings not covered by our snippets, via `make oldconfig`.
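+The split roughly works like this: a snippet only pins Flatcar-specific options, and everything else is filled in from the kernel's own defaults by `make oldconfig`. An illustrative (made-up) excerpt:
+```
+# commonconfig-<kernel version> (excerpt, illustrative values only)
+CONFIG_SECURITY=y
+CONFIG_MODULE_SIG=y
+# options not listed in any snippet get their defaults via `make oldconfig`
+```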
+
+The first section below will elaborate on developing and testing your modifications via Portage's temporary build directory before we’ll merge into the ebuilds mentioned above.
+This way we’ll arrive at a boot-able, test-able image before merging your changes into the coreos-overlay ebuild file.
+Using Gentoo’s build-temp directories will also allow you to better iterate on your changes if you encounter problems during the build, or when testing your changes in a qemu image.
+
+Only after we’ve tested our changes will we modify the kernel ebuild in `coreos-overlay` to persist the new configuration.
+
+First, we will set up kernel and module sources, and modify those before build. To fetch and to configure the sources and to build a stock kernel, run:
+```shell
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-modules/coreos-modules-<version>.ebuild configure
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-kernel/coreos-kernel-<version>.ebuild compile
+```
+
+`ebuild` is a low-level tool and part of the Portage ecosystem.
+It is used by the higher level `emerge` tool for fetching, building, and installing source packages.
+A single `emerge` call runs the `ebuild` phases `fetch`, `unpack`, `compile`, `install`, `merge`, and `package` in order.
+Using `ebuild` instead of `emerge` allows us to stop the installation process after the package sources are configured, edit the sources, and then continue with the installation.
+Let’s cd to the configured kernel source tree in Gentoo’s temporary build directory:
+```shell
+~/trunk/src/scripts $ cd /build/amd64-usr/var/tmp/portage/sys-kernel/coreos-kernel-<version>/work/coreos-kernel-<version>/build
+```
+
+Before we introduce our modifications we’ll make a copy of the original config:
+```shell
+ $ cp .config ~/trunk/src/scripts/kernel-config.orig
+```
+
+The kernel’s menuconfig is a nice way to review the configuration as well as to make changes:
+```shell
+ $ make menuconfig
+```
+
+Make your changes, save the new configuration, and copy the resulting `.config` to `scripts/`:
+```shell
+ $ cp .config ~/trunk/src/scripts/kernel-config.mine
+```
+
+Back in `~/trunk/src/scripts/`, rebuild the kernel image:
+```shell
+ $ cd ~/trunk/src/scripts/
+~/trunk/src/scripts $ rm /build/amd64-usr/var/tmp/portage/sys-kernel/coreos-kernel-<version>/.compiled
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-kernel/coreos-kernel-<version>.ebuild compile
+```
+
+The kernel configuration will contain an auto-generated `CONFIG_INITRAMFS_SOURCE` line.
+This line must not be present in a pristine Flatcar kernel config (i.e. in an original ebuild config); there’s a sanity check in the module ebuild that will cause the module build to fail if that line is present.
+So we’ll remove it:
+```shell
+~/trunk/src/scripts $ sed -i 's/^CONFIG_INITRAMFS_SOURCE=.*//' kernel-config.mine
+```
+
+Then delete the modules build directory - which we only needed above to get to a kernel .config - and fetch it anew, copy the kernel configuration, and rebuild the modules:
+```shell
+~/trunk/src/scripts $ rm -rf /build/amd64-usr/var/tmp/portage/sys-kernel/coreos-modules-<version>
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-modules/coreos-modules-<version>.ebuild unpack
+~/trunk/src/scripts $ cp kernel-config.mine /build/amd64-usr/var/tmp/portage/sys-kernel/coreos-modules-<version>/work/coreos-modules-<version>/build/.config
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-modules/coreos-modules-<version>.ebuild compile
+```
+
+At this point, we have both a kernel build as well as kernel module binaries - but these are in temporary working directories.
+In order to be able to use those for an image build, we need to generate binary packages from what we compiled.
+All binary packages reside in the board chroot at `/build/amd64-usr/var/lib/portage/pkgs/`.
+In the next step, we’ll build `coreos-kernel-<version>.tbz2` and `coreos-modules-<version>.tbz2`, which will land in `/build/amd64-usr/var/lib/portage/pkgs/sys-kernel/`.
+
+We package the kernel and kernel modules:
+```shell
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-kernel/coreos-kernel-<version>.ebuild package
+~/trunk/src/scripts $ ebuild-amd64-usr ../third_party/coreos-overlay/sys-kernel/coreos-modules/coreos-modules-<version>.ebuild package
+```
+
+These packages can now be picked up by the image builder script. Let’s build a new image and boot it with qemu - this will allow us to validate the changes we made to the kernel config before persisting:
+```shell
+~/trunk/src/scripts $ ./build_image --board=amd64-usr
+~/trunk/src/scripts $ ./image_to_vm.sh --from=../build/images/amd64-usr/latest --board=amd64-usr --format qemu
+```
+
+*On the host*, start the qemu VM.
+Then, *in a different terminal*, ssh into the VM and validate your modifications.
+```shell
+scripts $ ../build/images/amd64-usr/latest/flatcar_production_qemu.sh
+scripts $ ssh core@localhost -p 2222
+core@localhost ~ $ ...
+```
+
+After we’ve verified that our modifications work as expected, let’s persist the changes into the ebuild file - in `sys-kernel/coreos-modules` (as previously mentioned).
+First, we’ll generate a diff between the original config and our own config.
+Then, we’ll open an editor and manually transfer the settings we actually changed - remember, the config snippets in `coreos-overlay` only contain Flatcar specifics.
+```shell
+~/trunk/src/scripts $ diff kernel-config.orig kernel-config.mine > ../third_party/coreos-overlay/sys-kernel/coreos-modules/files/my.diff
+~/trunk/src/scripts $ cd ../third_party/coreos-overlay/sys-kernel/coreos-modules/files/
+~/trunk/src/third_party/coreos-overlay/sys-kernel/coreos-modules/files/ $ vim -O commonconfig* amd64_defconfig* my.diff
+~/trunk/src/third_party/coreos-overlay/sys-kernel/coreos-modules/files/ $ rm my.diff
+```
+
+Finally, we’ll rebuild kernel and modules using the updated ebuild, to make sure the build works:
+```shell
+~/trunk/src/scripts $ emerge-amd64-usr sys-kernel/coreos-kernel
+~/trunk/src/scripts $ emerge-amd64-usr sys-kernel/coreos-modules
+~/trunk/src/scripts $ ./build_image --board=amd64-usr
+~/trunk/src/scripts $ ./image_to_vm.sh --from=../build/images/amd64-usr/latest --board=amd64-usr --format qemu
+```
+
+*On the host*, start the qemu VM.
+Then, *in a different terminal*, ssh into the VM and validate your modifications.
+```shell
+scripts $ ../build/images/amd64-usr/latest/flatcar_production_qemu.sh
+scripts $ ssh core@localhost -p 2222
+core@localhost ~ $ ...
+```
+
+## Testing images
+
+[Mantle][mantle] is a collection of utilities used in testing and launching SDK images.
+
+## Rebuilding the SDK
+
+Take a look at the [SDK bootstrap process](sdk-bootstrapping) to learn how to build your own SDK.
+
+[flatcar-dev]: https://groups.google.com/forum/#!forum/flatcar-linux-dev
+[github-flatcar]: https://github.com/flatcar
+[matrix]: https://app.element.io/#/room/#flatcar:matrix.org
+[ghcr-sdk]: https://github.com/orgs/flatcar/packages
+[scripts]: https://github.com/flatcar/scripts
+[flatcar-releases]: https://www.flatcar-linux.org/releases/
+
+
+[coreos]: https://github.com/flatcar/scripts/tree/main/sdk_container/src/third_party/coreos-overlay
+[portage]: https://github.com/flatcar/scripts/tree/main/sdk_container/src/third_party/portage-stable
+[mantle]: https://github.com/flatcar/mantle
+[prodimages]: sdk-building-production-images
+[sdktips]: sdk-tips-and-tricks
+[booting-qemu]: ../../installing/vms/qemu/#ssh-keys
diff --git a/content/docs/latest/reference/developer-guides/sdk-tips-and-tricks.md b/content/docs/latest/reference/developer-guides/sdk-tips-and-tricks.md
new file mode 100644
index 00000000..8a88fef3
--- /dev/null
+++ b/content/docs/latest/reference/developer-guides/sdk-tips-and-tricks.md
@@ -0,0 +1,294 @@
+---
+title: Tips and tricks
+weight: 10
+aliases:
+ - ../../os/sdk-tips-and-tricks
+---
+
+## Finding all open pull requests and issues
+
+- [Flatcar Container Linux Issues][issues]
+- [Flatcar Container Linux Pull Requests][pullrequests]
+
+[issues]: https://github.com/issues?user=flatcar-linux
+[pullrequests]: https://github.com/pulls?user=flatcar-linux
+
+## Searching all repo code
+
+Using `repo grep` you can search across all of the Git repos at once:
+
+```shell
+repo grep CONFIG_EXTRA_FIRMWARE
+```
+
+Note: this could take some time.
+
+### Base system dependency graph
+
+Get a view into what the base system will contain and why it will contain those things with the emerge tree view:
+
+```shell
+equery-amd64-usr depgraph --depth 1 coreos-base/coreos-dev
+```
+
+Get a tree view of the SDK dependencies:
+
+```shell
+equery depgraph --depth 1 coreos-base/hard-host-depends coreos-devel/sdk-depends
+```
+
+### Import ebuilds from Gentoo
+
+You can use `scripts/update_ebuilds` to fetch unmodified packages into `src/third_party/portage-stable` and add the files to git. The package argument should be in the format of `category/package-name`, e.g.:
+
+```shell
+~/trunk/src/scripts $ ./update_ebuilds sys-block/open-iscsi
+```
+
+Modified packages must be moved out of `src/third_party/portage-stable` to `src/third_party/coreos-overlay`.
+
+If you know in advance that any files in the upstream package will need to be changed, the package can be fetched from upstream Gentoo directly into `src/third_party/coreos-overlay`. e.g.:
+
+```shell
+~/trunk/src/third_party/coreos-overlay $ mkdir -p sys-block/open-iscsi
+~/trunk/src/third_party/coreos-overlay $ rsync -av rsync://rsync.gentoo.org/gentoo-portage/sys-block/open-iscsi/ sys-block/open-iscsi/
+```
+
+The trailing `/` prevents rsync from creating an extra directory level for the package, so you don't end up with `sys-block/open-iscsi/open-iscsi`. Remember to add any new files to git.
+
+To quickly test your new package(s), use the following commands:
+
+```shell
+~/trunk/src/scripts $ # Manually merge a package in the chroot
+~/trunk/src/scripts $ emerge-amd64-usr packagename
+~/trunk/src/scripts $ # Manually unmerge a package in the chroot
+~/trunk/src/scripts $ emerge-amd64-usr --unmerge packagename
+~/trunk/src/scripts $ # Remove a binary from the cache
+~/trunk/src/scripts $ sudo rm /build/amd64-usr/packages/category/packagename-version.tbz2
+```
+
+To include the new package as a dependency of Flatcar Container Linux, add the package to the end of the `RDEPEND` environment variable in `coreos-base/coreos/coreos-0.0.1.ebuild`, then increment the revision of Flatcar Container Linux by renaming the symlink, e.g.:
+
+```shell
+~/trunk/src/third_party/coreos-overlay $ git mv coreos-base/coreos/coreos-0.0.1-r237.ebuild coreos-base/coreos/coreos-0.0.1-r238.ebuild
+```
+
+The new package will now be built and installed as part of the normal build flow when you run `build_packages` again.
+
+If tests are successful, commit the changes, push to your GitHub fork, and create a pull request (see the [contribution guidelines][CONTRIBUTING]).
+
+[CONTRIBUTING]: https://github.com/flatcar/Flatcar#participate-and-contribute
+
+### Packaging references
+
+References:
+
+- Chromium OS [Portage Build FAQ]
+- [Gentoo Development Guide]
+- [Package Manager Specification]
+
+[Portage Build FAQ]: http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/portage-build-faq
+[Gentoo Development Guide]: http://devmanual.gentoo.org/
+[Package Manager Specification]: https://wiki.gentoo.org/wiki/Package_Manager_Specification
+
+
+## Set a password for the core user (when building your own images)
+
+Your SSH keys should be detected and added automatically by the image build process. Optionally, you can set a password for the `core` user, which you can use later for console or SSH password authentication should SSH public key authentication not work for you.
+
+After entering the SDK container for the first time (or after re-creating it), you can set user `core`'s password:
+
+```shell
+$ ./set_shared_user_password.sh
+```
+
+This is the password you will use to log into the console of images built with the SDK.
+
+## Caching git HTTPS passwords
+
+Turn on the credential helper and git will save your password in memory for some time:
+
+```shell
+git config --global credential.helper cache
+```
+
+Note: You need git 1.7.10 or newer to use the credential helper.
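+
+The default cache timeout is 15 minutes. If that is too short, you can pass a timeout (in seconds) to the helper; this is standard `git` behavior:
+
+```shell
+# Cache credentials for one hour instead of the default 15 minutes
+git config --global credential.helper 'cache --timeout=3600'
+
+# Confirm which helper is now configured
+git config --global credential.helper
+```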
+
+Why doesn't Flatcar Container Linux use SSH in the git remotes? Because we can't do anonymous clones from GitHub with an SSH URL. This will be fixed eventually.
+
+## SSH config
+
+You will be booting lots of VMs with on-the-fly SSH key generation. Add this to your `$HOME/.ssh/config` to suppress the host key fingerprint warnings:
+
+```ini
+Host 127.0.0.1
+ StrictHostKeyChecking no
+ UserKnownHostsFile /dev/null
+ User core
+ LogLevel QUIET
+```
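+
+Note that `ssh` matches `Host` entries against the exact host string you type, so `localhost` and `127.0.0.1` are treated as different hosts. If you also connect via `localhost`, add a matching entry with the same options:
+
+```ini
+Host localhost
+ StrictHostKeyChecking no
+ UserKnownHostsFile /dev/null
+ User core
+ LogLevel QUIET
+```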
+
+## Hide loop devices from desktop environments
+
+By default, desktop environments will diligently display any mounted devices, including the loop devices used to construct Flatcar Container Linux disk images. If the daemon responsible for this happens to be `udisks`, then you can disable this behavior with the following udev rule:
+
+```shell
+echo 'SUBSYSTEM=="block", KERNEL=="ram*|loop*", ENV{UDISKS_PRESENTATION_HIDE}="1", ENV{UDISKS_PRESENTATION_NOPOLICY}="1"' > /etc/udev/rules.d/85-hide-loop.rules
+udevadm control --reload
+```
+
+## Leaving developer mode
+
+Some daemons act differently in "dev mode". For example, update_engine refuses to auto-update or connect to HTTPS URLs. If you need to test something outside of dev mode on a VM, you can do the following:
+
+```shell
+mv /root/.dev_mode{,.old}
+```
+
+If you want to leave developer mode permanently, run the following:
+
+```shell
+crossystem disable_dev_request=1; reboot
+```
+
+## Re-initialise the SDK container
+
+By default, the SDK container is re-used when using the `./run_sdk_container` script; all your changes within the container are preserved.
+To reset the container, list all docker containers:
+```shell
+docker ps --all
+…
+00a133b61c55 ghcr.io/flatcar/flatcar-sdk-all:3087.0.0 "/bin/sh -c /home/sd…" 2 weeks ago Exited (137) 11 days ago flatcar-sdk-all-3087.0.0_os-alpha-3087.0.0-1-g39d915ae
+…
+```
+and identify the SDK / OS image release version you've been working on.
+Then delete the container:
+```shell
+docker container rm 00a133b61c55
+```
+
+The next run of `./run_sdk_container` will initialise a new container.
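+
+If several stale SDK containers have accumulated, they can be removed in one go by filtering on the container name. The `flatcar-sdk` name prefix below is an assumption based on the default naming shown above - adjust it if your containers are named differently:
+
+```shell
+# Remove all containers whose name matches the SDK naming scheme;
+# --no-run-if-empty skips removal when the filter matches nothing
+docker ps --all --filter name=flatcar-sdk --quiet | xargs --no-run-if-empty docker container rm
+```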
+
+## Build everything from scratch
+
+If you want to build everything from scratch while excluding a few packages that take a long time to build, run:
+
+```shell
+emerge-amd64-usr --emptytree -1 -v --tree --exclude="dev-lang/rust sys-devel/gcc" coreos-base/coreos-dev
+```
+
+Or if you want to do the rebuild by running `build_packages`, you should remove the binary package of `coreos` before rebuilding it:
+
+```shell
+emerge-amd64-usr --unmerge coreos-base/coreos
+rm -f /build/amd64-usr/var/lib/portage/pkgs/coreos-base/coreos-0.0.1*.tbz2
+./build_packages
+```
+
+## Modify or update individual packages
+
+You can modify the package definitions in `third_party/coreos-overlay/`.
+A complete and thorough guide for modifying packages is [here][mod-cl].
+Changes for toolchain packages like the compiler need to be done to the SDK directly; `./setup_board` needs to be called after such changes (and ideally, the SDK should be rebuilt).
+Changes affecting only the OS image can be built by running `./build_packages && ./build_image`.
+All build commands can be run multiple times, but whether your latest changes are picked up depends on whether the package revision
+was increased (by renaming the ebuild file) or the package was uninstalled and its binary package removed (see the last commands in
+_Build everything from scratch_, where this was done for the parent package `coreos-base/coreos`).
+Therefore, we recommend running every build command only once in a fresh SDK to be sure that your most recent modification is used.
+
+For some packages, like the Linux kernel in `coreos-source`, `coreos-kernel`, and `coreos-modules`, renaming
+the ebuild file is enough to make the build download the new kernel version.
+Ebuilds for other packages under `coreos-overlay/` reference a specific commit in `CROS_WORKON_COMMIT` which needs to be changed.
+If files of a package changed their hash sums, use `ebuild packagename.ebuild manifest` to recalculate the hashes for
+the `Manifest` file.
+
+Here is an example of updating an individual package to a newer version:
+
+```shell
+git mv aaa-bbb/package/package-0.0.1-r1.ebuild aaa-bbb/package/package-0.0.1-r2.ebuild
+ebuild aaa-bbb/package/package-0.0.1-r2.ebuild manifest
+emerge-amd64-usr -1 -v aaa-bbb/package
+```
+
+Do not forget to update the package's version and revision in the `package.accept_keywords` files in the `profiles` directory.
+In some cases such a file can pin an exact version of a specific package, which needs to be updated as well.
+
+## Use binary packages from a shared build store
+
+Some packages like `coreos-modules` take a long time to build. Use:
+
+```shell
+./build_packages --getbinpkgver=$(gsutil cat gs://…/boards/amd64-usr/current-master/version.txt |& sed -n 's/^FLATCAR_VERSION=//p')
+```
+
+to use packages from another build store.
+
+## Allow /usr to be remounted as read-write
+
+By default, the `/usr` partition of a Flatcar image cannot be remounted as read-write. However, for debugging purposes it is sometimes necessary to do exactly that. To build such a debugging image, use:
+
+```shell
+./build_image --noenable_rootfs_verification
+```
+
+This creates an image without dm-verity enabled. After booting the image, you can simply run:
+
+```shell
+sudo mount -o remount,rw /usr
+```
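+
+You can confirm that the remount worked by inspecting the mount options; `findmnt` ships with util-linux:
+
+```shell
+# Show the mount options of the filesystem holding /usr - "rw" should now be listed
+findmnt -T /usr -no OPTIONS
+```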
+
+## Known issues
+
+### build\_packages fails on coreos-base
+
+Sometimes coreos-dev or coreos builds will fail in `build_packages` with a backtrace pointing to `epoll`. This hasn't been tracked down but running `build_packages` again should fix it. The error looks something like this:
+
+```shell
+Packages failed:
+coreos-base/coreos-dev-0.1.0-r63
+coreos-base/coreos-0.0.1-r187
+```
+
+### Newly added package fails checking for kernel sources
+
+It may be necessary to comment out kernel source checks from the ebuild if the build fails, as Flatcar Container Linux does not yet provide visibility of the configured kernel source at build time. Usually this is not a problem, but may lead to warning messages.
+
+### `coreos-kernel` fails to link after previously aborting a build
+
+Emerging `coreos-kernel` (either manually or through `build_packages`) may fail with the error:
+
+```shell
+/usr/lib/gcc/x86_64-pc-linux-gnu/4.9.4/../../../../x86_64-pc-linux-gnu/bin/ld: scripts/kconfig/conf.o: relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC scripts/kconfig/conf.o: error adding symbols: Bad value
+```
+
+This indicates the ccache is corrupt. To clear the ccache, run:
+
+```shell
+CCACHE_DIR=/var/tmp/ccache ccache -C
+```
+
+To avoid corrupting the ccache, do not abort builds.
+
+### `build_image` hangs while emerging packages after previously aborting a build
+
+Delete all `*.portage_lockfile`s in `/build//`. To avoid stale lockfiles, do not abort builds.
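+
+A sketch of the cleanup; the `amd64-usr` board path is illustrative, so substitute your board's build root:
+
+```shell
+# Remove stale Portage lock files left behind by an aborted build
+sudo find /build/amd64-usr -name '*.portage_lockfile' -delete
+```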
+
+## Constants and IDs
+
+### Flatcar Container Linux app ID
+
+This UUID is used to identify Flatcar Container Linux to the update service and elsewhere:
+
+```uuid
+e96281a6-d1af-4bde-9a0a-97b76e56dc57
+```
+
+### GPT UUID types
+
+- Flatcar Container Linux Root: 5dfbf5f4-2848-4bac-aa5e-0d9a20b745a6
+- Flatcar Container Linux Reserved: c95dc21a-df0e-4340-8d7b-26cbfa9a03e0
+- Flatcar Container Linux Raid Containing Root: be9067b9-ea49-4f15-b4f6-f36f8c9e1818
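+
+These type GUIDs can be used to spot Flatcar partitions on a disk, for example with `lsblk` from util-linux:
+
+```shell
+# List partitions together with their GPT partition-type GUIDs
+lsblk -o NAME,PARTTYPE,FSTYPE,SIZE
+```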
+
+
+
+[mod-cl]: sdk-modifying-flatcar
diff --git a/content/docs/latest/reference/integrations.md b/content/docs/latest/reference/integrations.md
new file mode 100644
index 00000000..3d2659b3
--- /dev/null
+++ b/content/docs/latest/reference/integrations.md
@@ -0,0 +1,17 @@
+---
+title: Integrations
+weight: 10
+aliases:
+ - ../os/integrations
+---
+
+This document tracks projects that integrate with Flatcar Container Linux. Please help us keep the list current by letting us know of other projects that use Flatcar Container Linux.
+
+## Projects
+
+- [Deis Workflow](https://deis.com/workflow/): an open source PaaS for Kubernetes that runs on Flatcar Container Linux.
+- [Amazon Web Services](https://aws.amazon.com/marketplace/pp/B01H62FDJM): Amazon's cloud computing solution. Offers Flatcar Container Linux.
+- [Google Cloud Platform](https://cloud.google.com/compute/docs/images#os-compute-support): Google's cloud computing solution. Offers Flatcar Container Linux.
+- [Microsoft Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/compute?subcategories=operating-systems&page=1#): Microsoft's cloud computing solution. Offers Flatcar Container Linux.
+- [DigitalOcean](https://www.digitalocean.com/products/linux-distribution/coreos/): An independent cloud computing solution. Offers Flatcar Container Linux.
+- [Equinix Metal](https://metal.equinix.com/): A hosted bare metal solution. Offers Flatcar Container Linux.
diff --git a/content/docs/latest/reference/supply-chain.md b/content/docs/latest/reference/supply-chain.md
new file mode 100644
index 00000000..7cad2383
--- /dev/null
+++ b/content/docs/latest/reference/supply-chain.md
@@ -0,0 +1,232 @@
+---
+title: Supply chain security mechanisms
+weight: 10
+---
+
+
+### Flatcar Container Linux Supply Chain Security and SLSA
+
+The [Supply Chain Levels for Software Artifacts](https://slsa.dev/) (SLSA or 'salsa' for short) industry standard defines a checklist of standards and controls to prevent tampering, improve integrity, and secure packages and infrastructure in software projects.
+This document describes the Flatcar Container Linux project's current and planned compliance with the [requirements of SLSA](https://slsa.dev/spec/v0.1/requirements) and provides a deep dive into the processes and mechanisms to secure the Flatcar project supply chain.
+
+Our assessment is that Flatcar complies with SLSA Level 3. We are working to address the few remaining requirements for SLSA Level 4.
+
+#### SLSA Threat model and requirements
+
+![supply_chain_threats](../img/supply-chain-threats-slsa.png)
+
+SLSA defines a number of [key threats](https://slsa.dev/spec/v0.1/#supply-chain-threats) against supply chains:
+1. unauthorised changes to sources
+2. compromised source repositories
+3. builds from a modified source
+4. a compromised build process
+5. use of a compromised dependency
+6. publishing of a compromised package or image
+7. a compromised package or image repository
+8. injection / use of a compromised package or image
+
+To counter these threats, SLSA defines [requirements](https://slsa.dev/spec/v0.1/requirements) for sources, builds, and provenance, as well as common (overall) requirements.
+
+##### Table of SLSA requirements and conformance levels, and Flatcar's compliance
+
+This following table summarizes the requirements of each SLSA level, and Flatcar's current state of compliance.
+
+| SLSA requirement | SLSA level 1 | SLSA level 2 | SLSA level 3 | SLSA level 4 | Flatcar meets |
+|---------------------------------------------------|--------------|--------------|--------------|--------------|---------------|
+| Source integrity: Source is version controlled | | ✓ | ✓ | ✓ | ✓ |
+| Source integrity: Source has verified history | | | ✓ | ✓ | ✓ |
+| Source integrity: Source is retained indefinitely | | | 18 months | ✓ | ✓ |
+| Source integrity: Source is two-person reviewed | | | | ✓ | ✓ |
+| Build integrity: Scripted build | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Build integrity: Build service is used | | ✓ | ✓ | ✓ | ✓ |
+| Build integrity: Build as code | | | ✓ | ✓ | ✓ |
+| Build integrity: Built in ephemeral environment | | | ✓ | ✓ | ✓ |
+| Build integrity: Isolated | | | ✓ | ✓ | ✓ |
+| Build integrity: Parameterless | | | | ✓ | ✓ |
+| Build integrity: Hermetic | | | | ✓ | – [1] |
+| Build integrity: Reproducible | | | | Best Effort | ✓ [2] |
+| Provenance: Available | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Provenance: Authenticated | | ✓ | ✓ | ✓ | ✓ |
+| Provenance: Service generated | | ✓ | ✓ | ✓ | ✓ |
+| Provenance: Non-falsifiable | | | ✓ | ✓ | ✓ |
+| Provenance: Dependencies complete | | | | ✓ | ✓ |
+| Common - Security | | | | ✓ | – [3] |
+| Common - Access | | | | ✓ | ✓ |
+| Common - Superusers | | | | ✓ | – [4] |
+
+
+**Notes**
+
+1. Build integrity - Hermetic builds: While Flatcar includes the potential for hermetic builds today - all sources are known in advance and can be staged to a build machine isolated from the network - the current build infrastructure and automation does not implement this feature.
+ A [tracking issue](https://github.com/flatcar/Flatcar/issues/833) exists to address this in the future.
+2. Build integrity - Reproducible: Many software packages such as compilers and core libraries insert build-variable information such as timestamps, user IDs, and host names into their binaries during the build process.
+ While Flatcar's builds are 100% reproducible, the output may differ in a bit-by-bit comparison ONLY in places where this volatile information is compiled into the binaries.
+3. Common - Security: This SLSA requirement is marked TBD in the SLSA standard and is not well defined at the time of writing; the essence appears to gravitate around a verifiable tamper-proof build infrastructure, e.g. via a full chain of trust.
+ Flatcar is built on Flatcar to benefit from all the security features the distribution already ships with (discussed in detail below) - immutable OS binaries, boot time integrity check, etc.
+ However, Flatcar currently does not support setting up a full chain of trust via TPM. A [roadmap item](https://github.com/flatcar/Flatcar/issues/630) aims to add TPM support to Flatcar, and have the build infrastructure support a full chain of trust.
+4. Common - Superusers: The number of users with direct access to build infrastructure is very small, and users are well trusted.
+ However, changes to the build system do not enforce approval by a second administrator.
+
+### Deep dive: Implementation in Flatcar Container Linux
+
+Flatcar Container Linux employs a number of concrete mechanisms and processes to secure its supply chain.
+
+Broadly speaking, these break down into two areas:
+1. Mechanisms and processes to ensure validity of the Flatcar artifacts that make up release images.
+ Attestation is performed either automatically by the build system or by the maintainers team.
+ This includes validating the build pipeline's inputs / upstreams, securing the build process, and ensuring attestability of the resultant images and update payloads.
+2. Mechanisms and processes to be applied at provisioning time by users, as well as automatically at runtime, attesting validity of the artifacts in use.
+ Attestation is performed either automatically by the provisioning logic of Flatcar OS / client services or by users.
+ This includes validating signatures of Flatcar images by end users (or their provisioning automation), validating update payloads by the Flatcar update client, and verifying integrity during the boot process.
+
+
+#### Foundation
+
+Flatcar builds its supply chain security on a number of basic concepts which, in summary, provide the foundation for securing the entire Flatcar supply chain.
+These foundational concepts fall into one of the two areas outlined above.
+
+The build-time / release-time foundational concepts are:
+1. We always build from source.
+ All our artifacts are built from source; no pre-generated binaries are used.
+ Builds are performed by a validated SDK which is the result of a previous, validated, build.
+2. We ship whole OS images only; no incremental updates or upgrades of individual OS binaries or packages are supported.
+ Installation images are shipped as full disk images, including partitioning.
+ Updates are shipped as full partition images; an A/B OS partition scheme is employed for installing and for activating the update.
+
+The provisioning-time / OS upgrade / runtime foundational concepts are:
+1. Installation images contain a full, pre-partitioned disk including the full OS.
+ No additional OS binaries or packages are installed during or after provisioning.
+ (The root partition, which does not contain OS artifacts, is resized to span the whole disk at provisioning time for user convenience.)
+2. All OS binaries and libraries reside on a separate _read-only_ partition (mounted to `/usr` at runtime).
+ The partition cannot be written to.
+
+
+#### Flatcar supply chain security mechanisms
+
+Flatcar builds its supply chain security on the foundational concepts outlined above.
+In this section we will discuss the overall Flatcar build and release process as well as user-side provisioning, update, and operation - with a special focus on threat models and supply chain security.
+
+##### OS Image build and release of a new OS version
+
+Flatcar builds are reproducible; the software configuration state of any given release (or even nightly build) is recorded in git repositories and can be reproduced by a simple git clone + checkout + rebuild.
+We employ a number of mechanisms to make this process tamper-proof and to make artifacts we produce attestable.
+Please note that while builds are reproducible and will create the same binary code, the output may differ in a bit-by-bit comparison in places where volatile information like timestamps, hostnames, or user IDs are inserted at build time.
+
+Flatcar release images and related artifacts are automatically signed at build time (on the secure build infrastructure) with a 4096 bit GPG RSA key.
+Access to the image signing key is restricted to core maintainers.
+The image signing key is always stored encrypted and has a lifetime of one year.
+Renewing the image signing key requires split secrets of multiple maintainers.
+
+![supply_chain_build](../img/supply-chain-build.png)
+
+###### Inputs
+1. Flatcar's build automation and package definition repositories.
+ Write access to repositories is limited to a trusted group of core Flatcar maintainers (the @flatcar/flatcar-maintainers team in the flatcar GitHub org).
+ All changes are reviewed by at least one maintainer before merge.
+ 1. A [top-level build automation repo](https://github.com/flatcar/scripts).
+ This repository qualifies automation and package definitions of any given build by commit ID.
+ It includes all package definitions (ebuilds) in subdirectories.
+ Package definitions include the [portage-stable](https://github.com/flatcar/scripts/tree/main/sdk_container/src/third_party/portage-stable) and [coreos-overlay](https://github.com/flatcar/scripts/tree/main/sdk_container/src/third_party/coreos-overlay) ebuild "repositories".
+2. Upstream source tarballs of applications and libraries shipped with Flatcar.
+ Secured by cryptographic checksums stored in Flatcar's build automation repos (Gentoo standard).
+3. The SDK container.
+ The container is the result of a previous build and is validated by its container registry checksum.
+
+###### Process
+
+The OS image (and optionally, SDK) build and artifact signing are performed on a dedicated machine (not a VM) in a secure, access-controlled Equinix Metal data center.
+Access to the infrastructure is limited to a small number of core maintainers - a subset of the Flatcar maintainers team - and is reviewed regularly.
+Access is only possible via a VPN (not via public internet) and is verified with SSH keys.
+The build process entails:
+
+1. Cloning of Flatcar build automation (git repo) and package definitions / configurations.
+2. Fetching of source tarballs of apps and libraries that make up the OS image.
+ Integrity of source tarballs is validated against multiple cryptographic checksums stored in package definition (ebuild) repos.
+3. Building of apps and libraries, and generation of installation images and update image.
+ **During the build of each package, per-package SLSA provenance for most OS image packages is generated.**
+ Optionally, a new SDK is built prior to the OS apps and libs.
+ Full SDK rebuilds are usually done only for new major Alpha releases.
+ **During the SDK build, per-package SLSA provenance for core libraries and toolchains is generated.**
+ In rare circumstances, changes to toolchains and/or core libraries would mandate an SDK rebuild.
+ In that case a new SDK is published alongside the respective Flatcar Beta / Stable release.
+ **Note** that we track a number of feature requests to further improve SLSA provenance generation:
+ 1. Add builder ID information during CI builds: [tracking issue](https://github.com/flatcar/Flatcar/issues/813)
+ 2. Generate additional provenance for the whole image: [tracking issue](https://github.com/flatcar/Flatcar/issues/814)
+4. Signing of artifacts to enable validation of authenticity at provisioning time.
+ Signing also ensures SLSA provenance is non-falsifiable.
+ 1. A verity hash of the OS partition is generated and injected into the initrd so Flatcar can verify a tamper-free OS partition at boot time.
+ 2. There is an extra layer of security for the update image.
+ Many Flatcar deployments use automated updates so special care is taken to ensure these are not compromised.
+ A core maintainer downloads the update image from the secure build server, and validates the image and its server signature.
+ The image is then signed with a key stored on a hardware security module (HSM), in an air-gapped environment so the key is never exposed to the internet.
+
+###### Outputs
+
+After image builds conclude, OS images, update image, related artifacts and signature files reside on the secure build infrastructure and are ready for publishing.
+Access to the public image and update servers is limited to a subset of the Flatcar maintainers team.
+Accounts with access to the update server use 2-factor authentication.
+
+1. Artifacts and signatures are uploaded from the secure build infrastructure to the public image server.
+ The SDK container (if applicable) is pushed to the container registry (Flatcar uses GHCR at the time of writing).
+2. The update image and its (manually generated) signature are uploaded to the update server by the person who performed the manual signing step.
+3. Per-package SLSA provenance is shipped within the image at `/usr/share/SLSA/`.
+
+##### Provisioning-time / OS upgrade / run-time
+
+Flatcar ships with a number of mechanisms to attest the authenticity of artifacts consumed both when provisioning and when updating Flatcar.
+Further, Flatcar assesses the authenticity of all OS binaries - which reside on a separate, read-only partition - at each boot.
+
+![supply_chain_provision](../img/supply-chain-provision-runtime.png)
+
+###### Validation at provisioning time
+
+The public key component of the Flatcar image signing key (see above) is [available from the Flatcar website](https://www.flatcar.org/security/image-signing-key/) for verification.
+Using the public key, installation images can be validated against their signatures before provisioning, either manually by the user or (preferred) automatically by provisioning automation.
+In either case the Flatcar project provides the means for validation, but executing the process is ultimately the responsibility of the operator / user.
+In other words, while strongly recommended, validation is not enforced by the distribution, i.e. there are no mechanisms in place which would prevent installation of an image that was not validated.
+Installation automation provided by the Flatcar project (e.g. the [flatcar-install](https://github.com/flatcar/init/blob/flatcar-master/bin/flatcar-install) script) will verify authenticity of installation images.
+
+
+Each of the Flatcar installation images (for all supported vendors / platforms) is accompanied by
+1. a `.DIGESTS` file which contains cryptographic hashes of the respective image,
+2. a `.DIGESTS.sig` file containing the signature, and
+3. a `.DIGESTS.asc` file containing both the cryptographic hashes and the ASCII-armored signature, for convenience.
+
+Smaller artifacts, like text files containing the list of packages or the list of files contained in the OS image, do not ship with cryptographic hashes but are accompanied by `.sig` digital signature files directly.
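+
+As a sketch of the manual verification flow - the file name below is illustrative for one image artifact, and the image signing key from the page linked above must already be imported into your GPG keyring:
+
+```shell
+# 1. Check the signature on the DIGESTS file (gpg infers the data file name)
+gpg --verify flatcar_production_image.bin.bz2.DIGESTS.sig
+
+# 2. Check the downloaded image against the signed SHA512 hashes
+sha512sum -c flatcar_production_image.bin.bz2.DIGESTS
+```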
+
+###### Validation of OS partition at boot time
+
+All operating system binaries are contained in a separate, immutable (read-only) partition which is mounted to `/usr` at system boot.
+No OS binaries exist outside `/usr` and no individual files can be changed.
+The OS partition is validated on each boot using `dm-verity`. The verity hash is baked into the initrd at build time.
+
+###### Validation of update images at OS upgrade time
+
+Flatcar ships with a mechanism to auto-upgrade itself to new releases.
+The client service, `update_engine`, is included in the OS partition of the Flatcar image, i.e. it is on a read-only, `dm-verity`-validated partition.
+Before installation, update images are validated against the update signing key - this key has a separate, even stronger security process than the image signing key (see build process for details).
+`update_engine` uses a baked-in public key for validation.
+An update is installed only after successful validation.
+
+### Future improvements
+
+To further enhance attestability and supply chain security, we are considering the following (non-exhaustive) list of improvements for Flatcar.
+
+#### SLSA provenance
+
+1. Add builder ID information during CI builds: [tracking issue](https://github.com/flatcar/Flatcar/issues/813)
+2. Generate additional provenance for the whole image: [tracking issue](https://github.com/flatcar/Flatcar/issues/814)
+
+#### Build time
+
+1. Make release builds hermetic by providing all required assets beforehand and isolating the build machine from the network during build, to address the "Build integrity - Hermetic" requirement
+ ([tracking issue](https://github.com/flatcar/Flatcar/issues/833)).
+2. Establish a secure boot chain using TPM support when it becomes available (see the "Provisioning-time" item below).
+3. Remove login (local and remote) from build infrastructure and automate all build infra properties (infra-as-code).
+ Require approval from 2 administrators for every change.
+ This will address the "Common - Superusers" requirement.
+
+#### Provisioning-time / OS upgrade / run-time
+
+1. Integrate with hardware TPM (where available) to secure the boot process right from hardware start-up instead of just from the initial ramdisk
+ ([roadmap issue](https://github.com/flatcar/Flatcar/issues/630)), addressing the "Common - Security" requirement.
diff --git a/content/docs/latest/setup/_index.md b/content/docs/latest/setup/_index.md
new file mode 100644
index 00000000..28b021e7
--- /dev/null
+++ b/content/docs/latest/setup/_index.md
@@ -0,0 +1,9 @@
+---
+title: Setup and Operations
+description: >
+ Follow these guides to connect your machines together as a cluster.
+ Configure machine parameters, create users, inject multiple SSH keys, and
+ more with Butane configs.
+weight: 50
+---
+
diff --git a/content/docs/latest/setup/clusters/_index.md b/content/docs/latest/setup/clusters/_index.md
new file mode 100644
index 00000000..1d0533ba
--- /dev/null
+++ b/content/docs/latest/setup/clusters/_index.md
@@ -0,0 +1,9 @@
+---
+title: Managing Clusters
+description: >
+ One of the most common uses of Flatcar Container Linux is to create
+ clusters of machines. These guides will help you understand the different
+ cluster architectures that you can choose from, setup cluster discovery and
+ more.
+weight: 40
+---
diff --git a/content/docs/latest/setup/clusters/architectures.md b/content/docs/latest/setup/clusters/architectures.md
new file mode 100644
index 00000000..577f55c9
--- /dev/null
+++ b/content/docs/latest/setup/clusters/architectures.md
@@ -0,0 +1,233 @@
+---
+title: Cluster Architectures
+linktitle: Architectures
+description: Understanding different cluster sizes, how they get configured, and how machines interact with each other.
+weight: 10
+aliases:
+ - ../../os/cluster-architectures
+ - ../../clusters/creation/cluster-architectures
+---
+
+## Overview
+
+Depending on the size and expected use of your Flatcar Container Linux cluster, you will have different architectural requirements. A few of the common cluster architectures, as well as their strengths and weaknesses, are described below.
+
+Most of these scenarios dedicate a few machines, bare metal or virtual, to running central cluster services. These may include etcd and the distributed controllers for applications like Kubernetes, Mesos, and OpenStack. Isolating these services onto a few known machines helps to ensure they are distributed across cabinets or availability zones. It also helps in setting up static networking to allow for easy bootstrapping. This architecture helps to resolve concerns about relying on a discovery service.
+
+## Docker dev environment on laptop
+
+
+Laptop development environment with Flatcar Container Linux VM
+
+| Cost | Great For | Set Up Time | Production |
+|------|--------------------|-------------|------------|
+| Low | Laptop development | Minutes | No |
+
+If you're developing locally but plan to run containers in production, it's best practice to mirror that environment on your laptop. Run Docker commands on your laptop that control a Flatcar Container Linux VM in VMware Fusion or VirtualBox to reproduce your container production environment locally.
+
+### Configuring your laptop
+
+Start a single Flatcar Container Linux VM with the Docker remote socket enabled in the Butane Config. Here's what the config looks like:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: docker-tcp.socket
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Docker Socket for the API
+
+ [Socket]
+ ListenStream=2375
+ BindIPv6Only=both
+ Service=docker.service
+
+ [Install]
+ WantedBy=sockets.target
+```
+
+This file is used to provision your local Flatcar Container Linux machine on its first boot. This sets up and enables the Docker API, which is how you can use Docker on your laptop. The Docker CLI manages containers running within the VM, *not* on your personal operating system.
+
+Using the Butane Config Transpiler, or `butane` ([download][butane-download]), convert the above YAML into an [Ignition config][ignition-getting-started]. Alternatively, copy the contents of the Ignition tab in the above example. Once you have the Ignition configuration file, pass it to your provider.
+In addition to providers supported by [upstream Ignition][ignition-supported], Flatcar [supports](https://github.com/flatcar/scripts/blob/main/sdk_container/src/third_party/coreos-overlay/sys-apps/ignition/files/0018-revert-internal-oem-drop-noop-OEMs.patch) cloudsigma, hyperv, interoute, niftycloud, rackspace[-onmetal], and vagrant.
+
+Once the local VM is running, tell the Docker binary on your personal operating system to use the remote port by exporting an environment variable, then start running Docker commands. Run these commands in a terminal *on your local operating system (macOS or Linux), not in the Flatcar Container Linux virtual machine*:
+
+```shell
+export DOCKER_HOST=tcp://localhost:2375
+docker ps
+```
+
+This avoids discrepancies between your development and production environments.
+
+### Related local installation tools
+
+There are several different options for testing Flatcar Container Linux locally:
+
+- [Flatcar Container Linux on QEMU][flatcar-qemu] is a feature-rich way of running Flatcar Container Linux locally, provisioned by Ignition configs like the one shown above.
+- [Minikube][minikube] is used for local Kubernetes development. It does not use Flatcar Container Linux, but it is very fast to set up and is the easiest way to test-drive Kubernetes.
+
+## Small cluster
+
+
+Small Flatcar Container Linux cluster running etcd on all machines
+
+| Cost | Great For | Set Up Time | Production |
+|------|--------------------------------------------|-------------|------------|
+| Low | Small clusters, trying out Flatcar Container Linux | Minutes | Yes |
+
+For small clusters of between 3 and 9 machines, running etcd on all of the machines allows for high availability without paying for extra machines that just run etcd.
+
+Getting started is easy — a single Butane Config can be used to provision all machines in your environment.
+
+Once you have a small cluster up and running, you can install Kubernetes on it. You can do this easily using [Typhoon][typhoon].
+
+### Configuring the machines
+
+For more information on getting started with this architecture, see the Flatcar Container Linux documentation on [supported platforms][flatcar-supported]. These include [Amazon EC2][flatcar-ec2], [Equinix Metal][flatcar-equinix-metal], [Azure][flatcar-azure], [Google Compute Platform][flatcar-gce], [bare metal iPXE][flatcar-bm], [Digital Ocean][flatcar-do], and many more community supported platforms.
+
+Boot the desired number of machines with the same Butane Config and discovery token. The Butane Config specifies which services will be started on each machine.
+
+## Easy development/testing cluster
+
+
+Flatcar Container Linux cluster optimized for development and testing
+
+| Cost | Great For | Set Up Time | Production |
+|------|-----------|-------------|------------|
+| Low | Development/Testing | Minutes | No |
+
+When getting started with Flatcar Container Linux, it's common to frequently boot, reboot, and destroy machines while tweaking your configuration. To avoid the need to generate new discovery URLs and bootstrap etcd, start a single etcd node, and build your cluster around it.
+
+You can now boot as many machines as you'd like as test workers that read from the etcd node. All the features of Locksmith and etcdctl will continue to work properly but will connect to the etcd node instead of using a local etcd instance. Since etcd isn't running on all of the machines you'll gain a little bit of extra CPU and RAM to play with.
+
+You can easily provision the remaining (non-etcd) nodes with Kubernetes using [Typhoon][typhoon] and start running containerized apps on your cluster.
+
+Once this environment is set up, it's ready to be tested. Destroy a machine, and watch Kubernetes reschedule the units, max out the CPU, and rebuild your setup automatically.
+
+### Configuration for etcd role
+
+Since we're only using a single etcd node, there is no need to include a discovery token. There isn't any high availability for etcd in this configuration, but that's assumed to be OK for development and testing. Boot this machine first so you can configure the rest with its IP address, which is specified with the networkd unit.
+
+The networkd unit is typically used for bare metal installations that require static networking. See your provider's documentation for specific examples.
+
+Here's the Butane Config for the etcd machine:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: etcd-member.service
+ enabled: true
+ dropins:
+ - name: 20-clct-etcd-member.conf
+ contents: |
+ [Unit]
+ Requires=coreos-metadata.service
+ After=coreos-metadata.service
+ [Service]
+ Environment=ETCD_IMAGE_TAG=v3.1.5
+ Environment="ETCD_NAME=etcdserver"
+ ExecStart=
+ ExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \
+ --name="etcdserver" \
+ --listen-peer-urls="http://0.0.0.0:2380" \
+ --listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
+ --initial-advertise-peer-urls="http://10.0.0.101:2380" \
+ --initial-cluster="etcdserver=http://10.0.0.101:2380" \
+ --advertise-client-urls="http://10.0.0.101:2379"
+storage:
+ files:
+ - path: /etc/systemd/network/00-eth0.network
+ contents:
+ inline: |
+ [Match]
+ Name=eth0
+
+ [Network]
+ DNS=1.2.3.4
+ Address=10.0.0.101/24
+ Gateway=10.0.0.1
+```
+
+### Configuration for worker role
+
+This architecture allows you to boot any number of workers, from a single machine to a large cluster designed for load testing. The notable configuration difference for this role is specifying that applications like Kubernetes should use our etcd proxy instead of starting an etcd server locally.
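+
+The proxy behavior can be enabled through the etcd wrapper's environment. Here's a minimal sketch of a worker Butane Config, assuming the etcd v2-style `ETCD_PROXY` option and reusing the central node address `10.0.0.101` from the etcd role above:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+  units:
+    - name: etcd-member.service
+      enabled: true
+      dropins:
+        - name: 20-clct-etcd-member.conf
+          contents: |
+            [Service]
+            Environment=ETCD_IMAGE_TAG=v3.1.5
+            # Run as a local proxy that forwards to the etcd machine
+            # instead of joining the cluster as a member.
+            Environment=ETCD_PROXY=on
+            Environment="ETCD_LISTEN_CLIENT_URLS=http://127.0.0.1:2379"
+            Environment="ETCD_INITIAL_CLUSTER=etcdserver=http://10.0.0.101:2380"
+```
+
+Local clients on the worker then talk to `http://127.0.0.1:2379` as if etcd were running on the worker itself.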
+
+## Production cluster with central services
+
+
+Flatcar Container Linux cluster separated into central services and workers.
+
+| Cost | Great For | Set Up Time | Production |
+|------|-----------|-------------|------------|
+| High | Large bare-metal installations | Hours | Yes |
+
+For large clusters, it's recommended to set aside 3-5 machines to run central services. Once those are set up, you can boot as many workers as you wish. Each of the workers will use your distributed etcd cluster on the central machines via local etcd proxies. This is explained in greater depth below.
+
+### Configuration for central services role
+
+Our central services machines will run services like etcd and Kubernetes controllers that support the rest of the cluster. etcd is configured with static networking and a peers list.
+
+Here's an example Butane Config for one of the central service machines. Be sure to generate a new discovery token with the initial size of your cluster:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: etcd-member.service
+ enabled: true
+ dropins:
+ - name: 20-clct-etcd-member.conf
+ contents: |
+ [Unit]
+ Requires=coreos-metadata.service
+ After=coreos-metadata.service
+ [Service]
+ Environment=ETCD_IMAGE_TAG=v3.1.5
+ Environment="ETCD_NAME=etcdserver"
+ ExecStart=
+ ExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \
+ --name="etcdserver" \
+ --listen-peer-urls="http://10.0.0.101:2380" \
+ --listen-client-urls="http://0.0.0.0:2379" \
+ --initial-advertise-peer-urls="http://10.0.0.101:2380" \
+ --initial-cluster="etcdserver=http://10.0.0.101:2380" \
+ --advertise-client-urls="http://10.0.0.101:2379" \
+ --discovery="https://discovery.etcd.io/"
+# generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3
+# specify the initial size of your cluster with ?size=X
+storage:
+ files:
+ - path: /etc/systemd/network/00-eth0.network
+ contents:
+ inline: |
+ [Match]
+ Name=eth0
+
+ [Network]
+ DNS=1.2.3.4
+ Address=10.0.0.101/24
+ Gateway=10.0.0.1
+```
+
+[butane-download]: https://github.com/coreos/butane/releases
+[ignition-getting-started]: https://github.com/coreos/ignition/blob/main/docs/getting-started.md
+[ignition-supported]: https://github.com/coreos/ignition/blob/main/docs/supported-platforms.md
+[flatcar-qemu]: ../../installing/vms/qemu
+[minikube]: https://github.com/kubernetes/minikube
+[nebraska-update]: https://github.com/kinvolk/nebraska
+[flatcar-channels]: https://www.flatcar-linux.org/releases/
+[flatcar-supported]: ../../
+[flatcar-ec2]: ../../installing/cloud/aws-ec2
+[flatcar-equinix-metal]: ../../installing/cloud/equinix-metal
+[flatcar-azure]: ../../installing/cloud/azure
+[flatcar-gce]: ../../installing/cloud/gcp
+[flatcar-do]: ../../installing/cloud/digitalocean
+[flatcar-bm]: ../../installing/bare-metal/booting-with-ipxe
+[typhoon]: https://github.com/poseidon/typhoon
diff --git a/content/docs/latest/setup/clusters/booting-on-ecs.md b/content/docs/latest/setup/clusters/booting-on-ecs.md
new file mode 100644
index 00000000..77af0dc2
--- /dev/null
+++ b/content/docs/latest/setup/clusters/booting-on-ecs.md
@@ -0,0 +1,101 @@
+---
+title: Running Flatcar Container Linux with AWS EC2 Container Service
+linktitle: Using AWS ECS
+description: How to setup AWS ECS clusters using Flatcar.
+weight: 30
+aliases:
+ - ../../os/booting-on-ecs
+ - ../../clusters/management/booting-on-ecs
+---
+
+[Amazon EC2 Container Service (ECS)][aws-ecs] is a container management service which provides a set of APIs for scheduling container workloads across EC2 clusters. It supports Flatcar Container Linux with Docker containers.
+
+Your Flatcar Container Linux machines communicate with ECS via an agent. The agent interacts with Docker to start new containers and gather information about running containers.
+
+## Set up a new cluster
+
+When booting your [Flatcar Container Linux Machines on EC2][boot-ec2], configure the ECS agent to be started via [Ignition][ignition-docs].
+
+Be sure to change `ECS_CLUSTER` to the cluster name you've configured via the ECS CLI or leave it empty for the default. Here's a full config example:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /var/lib/iptables/rules-save
+ mode: 0644
+ contents:
+ inline: |
+ *nat
+ -A PREROUTING -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 127.0.0.1:51679
+ -A OUTPUT -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
+ COMMIT
+ - path: /etc/sysctl.d/localnet.conf
+ mode: 0644
+ contents:
+ inline: |
+ net.ipv4.conf.all.route_localnet=1
+
+systemd:
+ units:
+ - name: iptables-restore.service
+ enabled: true
+ - name: systemd-sysctl.service
+ enabled: true
+ - name: amazon-ecs-agent.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=AWS ECS Agent
+ Documentation=https://docs.aws.amazon.com/AmazonECS/latest/developerguide/
+ Requires=docker.socket
+ After=docker.socket
+
+ [Service]
+ Environment=ECS_CLUSTER=your_cluster_name
+ Environment=ECS_LOGLEVEL=info
+ Environment=ECS_VERSION=latest
+ Restart=on-failure
+ RestartSec=30
+ RestartPreventExitStatus=5
+ SyslogIdentifier=ecs-agent
+ ExecStartPre=-/bin/mkdir -p /var/log/ecs /var/ecs-data /etc/ecs
+ ExecStartPre=-/usr/bin/touch /etc/ecs/ecs.config
+ ExecStartPre=-/usr/bin/docker kill ecs-agent
+ ExecStartPre=-/usr/bin/docker rm ecs-agent
+ ExecStartPre=/usr/bin/docker pull amazon/amazon-ecs-agent:${ECS_VERSION}
+ ExecStart=/usr/bin/docker run \
+ --name ecs-agent \
+ --env-file=/etc/ecs/ecs.config \
+ --volume=/var/run/docker.sock:/var/run/docker.sock \
+ --volume=/var/log/ecs:/log \
+ --volume=/var/ecs-data:/data \
+ --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
+ --volume=/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \
+ --publish=127.0.0.1:51678:51678 \
+ --publish=127.0.0.1:51679:51679 \
+ --env=ECS_AVAILABLE_LOGGING_DRIVERS='["awslogs","json-file","journald","logentries","splunk","syslog"]' \
+ --env=ECS_ENABLE_TASK_IAM_ROLE=true \
+ --env=ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true \
+ --env=ECS_LOGFILE=/log/ecs-agent.log \
+ --env=ECS_LOGLEVEL=${ECS_LOGLEVEL} \
+ --env=ECS_DATADIR=/data \
+ --env=ECS_CLUSTER=${ECS_CLUSTER} \
+ amazon/amazon-ecs-agent:${ECS_VERSION}
+
+ [Install]
+ WantedBy=multi-user.target
+```
+
+The example above pulls the latest official Amazon ECS agent container from the Docker Hub when the machine starts. If you ever need to update the agent, it's as simple as restarting the `amazon-ecs-agent` service or the Flatcar Container Linux machine.
+
+If you want to configure SSH keys in order to log in, mount disks or configure other options, see the [Butane config documentation][butane-configs].
+
+For more information on using ECS, check out the [official Amazon documentation][ecs-docs].
+
+[aws-ecs]: http://aws.amazon.com/ecs/
+[boot-ec2]: ../../installing/cloud/aws-ec2
+[butane-configs]: ../../provisioning/config-transpiler
+[ignition-docs]: ../../provisioning/ignition
+[ecs-docs]: http://aws.amazon.com/documentation/ecs/
diff --git a/content/docs/latest/setup/clusters/discovery.md b/content/docs/latest/setup/clusters/discovery.md
new file mode 100644
index 00000000..21aadf23
--- /dev/null
+++ b/content/docs/latest/setup/clusters/discovery.md
@@ -0,0 +1,166 @@
+---
+title: Cluster discovery
+description: How to configure etcd so that cluster discovery works on your Flatcar clusters.
+weight: 10
+aliases:
+ - ../../os/cluster-discovery
+ - ../../clusters/creation/cluster-discovery
+---
+
+## Overview
+
+Flatcar Container Linux uses etcd, a service running on each machine, to handle coordination between software running on the cluster. For a group of Flatcar Container Linux machines to form a cluster, their etcd instances need to be connected.
+
+A discovery service, [https://discovery.etcd.io](https://discovery.etcd.io), is provided as a free service to help connect etcd instances together by storing a list of peer addresses, metadata, and the initial size of the cluster under a unique address, known as the discovery URL. You can generate one very easily:
+
+```shell
+$ curl -w "\n" 'https://discovery.etcd.io/new?size=3'
+https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3
+```
+
+The discovery URL can be provided to each Flatcar Container Linux machine
+via [Butane Configs](../../provisioning/config-transpiler). The rest of this guide will
+explain what's happening behind the scenes, but if you're trying to get
+clustered as quickly as possible, all you need to do is provide a _fresh,
+unique_ discovery token in your config.
+
+Boot each one of the machines with an identical Butane Config and they should be automatically clustered:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: etcd-member.service
+ enabled: true
+ dropins:
+ - name: 20-clct-etcd-member.conf
+ contents: |
+ [Unit]
+ Requires=coreos-metadata.service
+ After=coreos-metadata.service
+ [Service]
+ EnvironmentFile=/run/metadata/flatcar
+ ExecStart=
+ ExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \
+ --listen-peer-urls="http://${COREOS_CUSTOM_PRIVATE_IPV4}:2380" \
+ --listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
+ --initial-advertise-peer-urls="http://${COREOS_CUSTOM_PRIVATE_IPV4}:2380" \
+ --advertise-client-urls="http://${COREOS_CUSTOM_PRIVATE_IPV4}:2379,http://${COREOS_CUSTOM_PRIVATE_IPV4}:4001" \
+ --discovery="https://discovery.etcd.io/"
+```
+
+Note that you need to generate a new token for each unique cluster from https://discovery.etcd.io/new?size=3 where you specify the initial size of your cluster with `?size=X`.
+The variable name used here needs to be changed to match the one Afterburn provides for your platform.
+Multi-region and multi-cloud deployments need to use the public IP address.
+The configuration listens on both the official ports and the legacy ports.
+Legacy ports can be omitted if your application doesn't depend on them.
+
+Specific configuration examples are provided in each platform's guide.
+
+## New clusters
+
+Starting a Flatcar Container Linux cluster requires one of the new machines to become the first leader of the cluster. The initial leader is stored as metadata with the discovery URL in order to inform the other members of the new cluster. Let's walk through a timeline of a new three-machine Flatcar Container Linux cluster discovering each other:
+
+1. All three machines are booted via a cloud-provider with the same config in the user-data.
+2. Machine 1 starts up first. It requests information about the cluster from the discovery token and submits its `-initial-advertise-peer-urls` address `10.10.10.1`.
+3. No state is recorded into the discovery URL metadata, so machine 1 becomes the leader and records the state as `started`.
+4. Machine 2 boots and submits its `-initial-advertise-peer-urls` address `10.10.10.2`. It also reads back the list of existing peers (only `10.10.10.1`) and attempts to connect to the address listed.
+5. Machine 2 connects to Machine 1 and is now part of the cluster as a follower.
+6. Machine 3 boots and submits its `-initial-advertise-peer-urls` address `10.10.10.3`. It reads back the list of peers (`10.10.10.1` and `10.10.10.2`) and selects one of the addresses to try first. When it connects to a machine in the cluster, the machine is given a full list of the existing other members of the cluster.
+7. The cluster is now bootstrapped with an initial leader and two followers.
+
+There are a few interesting things happening during this process.
+
+First, each machine is configured with the same discovery URL and etcd figured out what to do. This allows you to load the same Butane Config into an auto-scaling group and it will work whether it is the first or 30th machine in the group.
+
+Second, machine 3 only needed to use one of the addresses stored in the discovery URL to connect to the cluster. Since etcd uses the Raft consensus algorithm, existing machines in the cluster already maintain a list of healthy members in order for the algorithm to function properly. This list is given to the new machine and it starts normal operations with each of the other cluster members.
+
+Third, if you specified `?size=3` upon discovery URL creation, any other machines that join the cluster in the future will automatically start as etcd proxies.
+
+## Common problems with cluster discovery
+
+### Existing clusters
+
+[Do not use the public discovery service to reconfigure a running etcd cluster.][etcd-reconf-no-disc] The public discovery service is a convenience for bootstrapping new clusters, especially on cloud providers with dynamic IP assignment, but is not designed for the latter case, when the cluster is running and member IPs are known.
+
+To promote proxy members or join new members into an existing etcd cluster, configure static discovery and add members. The [etcd cluster reconfiguration guide][etcd-reconf-on-flatcar] details the steps for performing this reconfiguration on Flatcar Container Linux systems that were originally deployed with public discovery. The more general [etcd cluster reconfiguration document][etcd-reconf] explains the operations for removing and adding cluster members in a cluster already configured with static discovery.
+
+### Stale tokens
+
+A common problem with cluster discovery is attempting to boot a new cluster with a stale discovery URL. As explained above, the initial leader election is recorded into the URL, which indicates that the new etcd instance should be joining an existing cluster.
+
+If you provide a stale discovery URL, the new machines will attempt to connect to each of the old peer addresses, which will fail since they don't exist, and the bootstrapping process will fail.
+
+If you're wondering why the new machines can't simply form a new cluster when all the old peers appear to be down: if an etcd peer were merely caught in a network partition, it would look exactly like the "full-down" situation, and starting a new cluster would create a split-brain. Since etcd can never determine whether a token has been reused, it must assume the worst and abort the cluster discovery.
+
+If you're running into problems with your discovery URL, there are a few sources of information that can help you see what's going on. First, you can open the URL in a browser to see what information etcd is using to bootstrap itself:
+
+```json
+{
+  "action": "get",
+  "node": {
+    "key": "/_etcd/registry/506f6c1bc729377252232a0121247119",
+    "dir": true,
+    "nodes": [
+      {
+        "key": "/_etcd/registry/506f6c1bc729377252232a0121247119/0d79b4791be9688332cc05367366551e",
+        "value": "http://10.183.202.105:7001",
+        "expiration": "2014-08-17T16:21:37.426001686Z",
+        "ttl": 576008,
+        "modifiedIndex": 72783864,
+        "createdIndex": 72783864
+      },
+      {
+        "key": "/_etcd/registry/506f6c1bc729377252232a0121247119/c72c63ffce6680737ea2b670456aaacd",
+        "value": "http://10.65.177.56:7001",
+        "expiration": "2014-08-17T12:05:57.717243529Z",
+        "ttl": 560669,
+        "modifiedIndex": 72626400,
+        "createdIndex": 72626400
+      },
+      {
+        "key": "/_etcd/registry/506f6c1bc729377252232a0121247119/f7a93d1f0cd4d318c9ad0b624afb9cf9",
+        "value": "http://10.29.193.50:7001",
+        "expiration": "2014-08-17T17:18:25.045563473Z",
+        "ttl": 579416,
+        "modifiedIndex": 72821950,
+        "createdIndex": 72821950
+      }
+    ],
+    "modifiedIndex": 69367741,
+    "createdIndex": 69367741
+  }
+}
+```
+
+To rule out firewall settings as a source of your issue, ensure that you can curl each of the IPs from machines in your cluster.
+
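+To get the list of addresses to check, you can extract the registered peer URLs from the metadata with any JSON tool. A hedged sketch using `python3`, with the payload inlined from the hypothetical example above (normally you would `curl` your discovery URL instead):
+
+```shell
+# Save a trimmed copy of the discovery metadata locally.
+cat > /tmp/discovery.json <<'EOF'
+{"node":{"nodes":[{"value":"http://10.183.202.105:7001"},{"value":"http://10.65.177.56:7001"}]}}
+EOF
+# Print each registered peer URL, one per line, ready to feed to curl.
+python3 -c 'import json; [print(n["value"]) for n in json.load(open("/tmp/discovery.json"))["node"]["nodes"]]'
+```
+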
+If all of the IPs can be reached, the etcd log can provide more clues:
+
+```shell
+journalctl -u etcd-member
+```
+
+### Communicating with discovery.etcd.io
+
+If your Flatcar Container Linux cluster can't communicate out to the public internet, [https://discovery.etcd.io](https://discovery.etcd.io) won't work and you'll have to run your own discovery endpoint, which is described below.
+
+### Setting advertised client addresses correctly
+
+Each etcd instance submits its list of `-initial-advertise-peer-urls` to the configured discovery service. It's important to select an address that *all* peers in the cluster can communicate with. If you are configuring a list of addresses, make sure each member can communicate with at least one of the addresses.
+
+For example, if you're located in two regions of a cloud provider, configuring a private `10.x` address will not work between the two regions, and communication will not be possible between all peers. The `-listen-client-urls` flag allows you to bind to a specific list of interfaces and ports (or all interfaces) to ensure your etcd traffic is routed properly.
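+
+For such multi-region deployments, the fix is to advertise a publicly routable address. A hedged sketch of the drop-in, swapping in Afterburn's public IPv4 variable for the private one used earlier (the exact variable name differs per platform; check your platform's Afterburn documentation):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+  units:
+    - name: etcd-member.service
+      enabled: true
+      dropins:
+        - name: 20-clct-etcd-member.conf
+          contents: |
+            [Service]
+            EnvironmentFile=/run/metadata/flatcar
+            ExecStart=
+            ExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \
+              --listen-peer-urls="http://0.0.0.0:2380" \
+              --listen-client-urls="http://0.0.0.0:2379" \
+              --initial-advertise-peer-urls="http://${COREOS_CUSTOM_PUBLIC_IPV4}:2380" \
+              --advertise-client-urls="http://${COREOS_CUSTOM_PUBLIC_IPV4}:2379" \
+              --discovery="https://discovery.etcd.io/"
+```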
+
+## Running your own discovery service
+
+The public discovery service is just an etcd cluster made available to the public internet. Since the discovery service conducts and stores the result of the first leader election, it needs to be consistent. You wouldn't want two machines in the same cluster to think they were both the leader.
+
+Since etcd is designed for exactly this type of leader election, it was an obvious choice to use it for everyone's initial leader election. This means that it's easy to run your own etcd cluster for this purpose.
+
+If you're interested in how the discovery API works behind the scenes in etcd, read about [etcd clustering][etcd-clustering].
+
+[etcd-reconf]: https://etcd.io/docs/v3.4.0/op-guide/runtime-configuration/
+[etcd-reconf-no-disc]: https://etcd.io/docs/v3.4.0/op-guide/runtime-reconf-design/#do-not-use-public-discovery-service-for-runtime-reconfiguration
+[etcd-clustering]: https://etcd.io/docs/v3.4.0/op-guide/clustering/
+[etcd-reconf-on-flatcar]: https://github.com/coreos/docs/blob/master/etcd/etcd-live-cluster-reconfiguration.md
diff --git a/content/docs/latest/setup/customization/ACPI.md b/content/docs/latest/setup/customization/ACPI.md
new file mode 100644
index 00000000..5390013f
--- /dev/null
+++ b/content/docs/latest/setup/customization/ACPI.md
@@ -0,0 +1,70 @@
+---
+title: Handle ACPI events
+linktitle: ACPI
+description: Enable acpid and handle ACPI events
+weight: 60
+---
+
+## acpid
+
+Beginning with Flatcar major release 3255, `acpid` can be enabled at boot with Ignition.
+
+This can be configured with a [Butane][butane] definition:
+
+```yaml
+---
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: acpid.service
+ enabled: true
+storage:
+ files:
+ - path: /etc/acpi/events/default
+ contents:
+ inline: |
+ event=.*
+ action=/etc/acpi/default.sh %e
+
+ - path: /etc/acpi/default.sh
+ contents:
+ inline: |
+ set $*
+ logger "ACPI event handled: $*"
+ mode: 0744
+```
+
+This simple configuration only logs the handled ACPI events. Example with QEMU:
+
+```bash
+butane < config.yml > ignition.json
+./flatcar_production_qemu.sh -i ./ignition.json -- -qmp tcp:localhost:4444,server,wait=off
+```
+
+From another terminal, it's possible to send a shutdown signal for example:
+```bash
+telnet localhost 4444
+{ "execute": "qmp_capabilities" }
+{ "execute": "system_powerdown" }
+```
+
+From the `acpid` logs, it's possible to see the logger in action:
+```bash
+$ journalctl --unit acpid.service
+May 24 14:29:36 localhost systemd[1]: Started ACPI event daemon.
+May 24 14:29:36 localhost acpid[928]: starting up with netlink and the input layer
+May 24 14:29:36 localhost acpid[928]: 1 rule loaded
+May 24 14:29:36 localhost acpid[928]: waiting for events: event logging is off
+May 24 14:30:20 localhost root[1041]: ACPI event handled: button/power PBTN 00000080 00000000
+May 24 14:30:20 localhost systemd[1]: Stopping ACPI event daemon...
+May 24 14:30:20 localhost acpid[928]: exiting
+May 24 14:30:20 localhost systemd[1]: acpid.service: Deactivated successfully.
+May 24 14:30:20 localhost systemd[1]: Stopped ACPI event daemon.
+```
+
+## qemu-guest-agent
+
+Beginning with Flatcar major release 3402, qemu-guest-agent is part of all images and can handle certain lifecycle operations without acpid. The agent service will automatically be enabled if a virtio-port with the name `org.qemu.guest_agent.0` is detected. For OpenStack it is necessary to launch the instance with `hw_qemu_guest_agent=yes` set.
+
+[butane]: ../../provisioning/ignition/specification/#ignition-v3
diff --git a/content/docs/latest/setup/customization/_index.md b/content/docs/latest/setup/customization/_index.md
new file mode 100644
index 00000000..5ae4278d
--- /dev/null
+++ b/content/docs/latest/setup/customization/_index.md
@@ -0,0 +1,11 @@
+---
+title: Customizing Flatcar
+linktitle: Common Customizations
+description: >
+ Guides and examples on typical customizations done on Flatcar instances.
+ Including managing users, the DNS configuration, kernel modules, network
+ parameters and more.
+weight: 10
+aliases:
+ - ../clusters/customization
+---
diff --git a/content/docs/latest/setup/customization/adding-users.md b/content/docs/latest/setup/customization/adding-users.md
new file mode 100644
index 00000000..697d0a20
--- /dev/null
+++ b/content/docs/latest/setup/customization/adding-users.md
@@ -0,0 +1,102 @@
+---
+title: Adding users
+description: How to create additional user accounts, either manually or with Butane configs.
+weight: 10
+aliases:
+ - ../../os/adding-users
+ - ../../clusters/customization/adding-users
+---
+
+You can create user accounts on a Flatcar Container Linux machine manually with `useradd` or via a [Butane Config][butane-config] when the machine is created.
+
+## Add Users via Butane Configs
+
+In your Butane Config, you can specify many [different parameters][config-spec] for each user. Here's an example:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ ssh_authorized_keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq......."
+ - name: elroy
+ password_hash: "$6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm..."
+ ssh_authorized_keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq......."
+ groups: [ sudo, docker ]
+```
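+
+The `password_hash` value is a standard `crypt(3)` hash. One way to generate a SHA-512 hash for the field, assuming `openssl` is available (the salt and password here are placeholders):
+
+```shell
+# Produces a $6$... SHA-512 crypt hash suitable for password_hash.
+openssl passwd -6 -salt 5s2u6/jR 'hunter2'
+```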
+
+Because `usermod` cannot be used to add a user to a predefined system group, you can use [systemd-userdb][systemd-userdb] to define membership. Here's the same example with userdb:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: elroy
+ password_hash: "$6$5s2u6/jR$un0AvWnqilcgaNB3Mkxd5yYv6mTlWfOoCYHZmfi3LDKVltj.E8XNKEcwWm..."
+ ssh_authorized_keys:
+ - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGdByTgSVHq......."
+storage:
+ files:
+ - path: /etc/userdb/elroy:sudo.membership
+ contents:
+ inline: " "
+ - path: /etc/userdb/elroy:docker.membership
+ contents:
+ inline: " "
+```
+
+## Add user manually
+
+If you'd like to add a user manually, SSH to the machine and use the `useradd` tool. To create the user `user1`, run:
+
+```shell
+sudo useradd -p "*" -U -m user1 -G sudo
+```
+
+The `"*"` creates a user that cannot log in with a password but can log in via SSH key. `-U` creates a group for the user, `-G` adds the user to the existing `sudo` group, and `-m` creates a home directory. If you'd like to add a password for the user, run:
+
+```shell
+$ sudo passwd user1
+New password:
+Re-enter new password:
+passwd: password changed.
+```
+
+To assign an SSH key, run:
+
+```shell
+update-ssh-keys -u user1 -a user1 user1.pem
+```
+
+## Grant sudo Access
+
+If you trust the user, you can grant administrative privileges using `visudo`. `visudo` checks the file syntax before actually overwriting the `sudoers` file. This command should be run as root to avoid losing sudo access in the event of a failure. Instead of editing `/etc/sudoers` directly, create a new file under the `/etc/sudoers.d/` directory. When you run `visudo`, specify which file you are editing with the `-f` argument:
+
+```shell
+# visudo -f /etc/sudoers.d/user1
+```
+
+Add the line:
+
+```text
+user1 ALL=(ALL) NOPASSWD: ALL
+```
+
+Check that sudo has been granted:
+
+```shell
+# su user1
+$ cat /etc/sudoers.d/user1
+cat: /etc/sudoers.d/user1: Permission denied
+
+$ sudo cat /etc/sudoers.d/user1
+user1 ALL=(ALL) NOPASSWD: ALL
+```
+
+[butane-config]: ../../provisioning/config-transpiler
+[config-spec]: ../../provisioning/config-transpiler/configuration
+[systemd-userdb]: https://www.freedesktop.org/software/systemd/man/systemd-userdbd.service.html
diff --git a/content/docs/latest/setup/customization/configuring-date-and-timezone.md b/content/docs/latest/setup/customization/configuring-date-and-timezone.md
new file mode 100644
index 00000000..227c559b
--- /dev/null
+++ b/content/docs/latest/setup/customization/configuring-date-and-timezone.md
@@ -0,0 +1,216 @@
+---
+title: Configuring date and time zone
+description: How to configure date, timezone and time synchronization.
+weight: 20
+aliases:
+ - ../../os/configuring-date-and-timezone
+ - ../../clusters/customization/configuring-date-and-timezone
+---
+
+By default, Flatcar Container Linux machines keep time in the Coordinated Universal Time (UTC) zone and synchronize their clocks with the Network Time Protocol (NTP). This page contains information about customizing those defaults, explains the change in NTP client daemons in recent Flatcar Container Linux versions, and offers advice on best practices for timekeeping in Flatcar Container Linux clusters.
+
+## Viewing and changing time and date
+
+The [`timedatectl(1)`][timedatectl] command displays and sets the date, time, and time zone.
+
+```shell
+$ timedatectl status
+ Local time: Wed 2015-08-26 19:29:12 UTC
+ Universal time: Wed 2015-08-26 19:29:12 UTC
+ RTC time: Wed 2015-08-26 19:29:12
+ Time zone: UTC (UTC, +0000)
+ Network time on: no
+NTP synchronized: yes
+ RTC in local TZ: no
+ DST active: n/a
+```
+
+### Recommended: UTC time
+
+To avoid time zone confusion and the complexities of adjusting clocks for daylight saving time (or not) in accordance with regional custom, we recommend that all machines in Flatcar Container Linux clusters use UTC. This is the default time zone. To reset a machine to this default:
+
+```shell
+sudo timedatectl set-timezone UTC
+```
+
+### Changing the time zone
+
+If your site or application requires a different system time zone, start by listing the available options:
+
+```shell
+$ timedatectl list-timezones
+Africa/Abidjan
+Africa/Accra
+Africa/Addis_Ababa
+…
+```
+
+Pick a time zone from the list and set it:
+
+```shell
+sudo timedatectl set-timezone America/New_York
+```
+
+Check the changes:
+
+```shell
+$ timedatectl
+ Local time: Wed 2015-08-26 15:44:07 EDT
+ Universal time: Wed 2015-08-26 19:44:07 UTC
+ RTC time: Wed 2015-08-26 19:44:07
+ Time zone: America/New_York (EDT, -0400)
+ Network time on: no
+NTP synchronized: yes
+ RTC in local TZ: no
+ DST active: yes
+ Last DST change: DST began at
+ Sun 2015-03-08 01:59:59 EST
+ Sun 2015-03-08 03:00:00 EDT
+ Next DST change: DST ends (the clock jumps one hour backwards) at
+ Sun 2015-11-01 01:59:59 EDT
+ Sun 2015-11-01 01:00:00 EST
+```
+
+## Time synchronization
+
+Flatcar Container Linux clusters use NTP to synchronize the clocks of member nodes, and all machines start an NTP client at boot. The operating system uses [`systemd-timesyncd(8)`][systemd-timesyncd] as the default NTP client. Use `systemctl` to check which service is running:
+
+```shell
+$ systemctl status systemd-timesyncd ntpd
+● systemd-timesyncd.service - Network Time Synchronization
+ Loaded: loaded (/usr/lib64/systemd/system/systemd-timesyncd.service; disabled; vendor preset: disabled)
+ Active: active (running) since Thu 2015-05-14 05:43:20 UTC; 5 days ago
+ Docs: man:systemd-timesyncd.service(8)
+ Main PID: 480 (systemd-timesyn)
+ Status: "Using Time Server 169.254.169.254:123 (169.254.169.254)."
+ Memory: 448.0K
+ CGroup: /system.slice/systemd-timesyncd.service
+ └─480 /usr/lib/systemd/systemd-timesyncd
+
+● ntpd.service - Network Time Service
+ Loaded: loaded (/usr/lib64/systemd/system/ntpd.service; disabled; vendor preset: disabled)
+ Active: inactive (dead)
+```
+
+### Recommended NTP sources
+
+Unless you have a highly reliable and precise time server pool, use your cloud provider's NTP source, or, on bare metal, the default Flatcar Container Linux NTP servers:
+
+```text
+0.flatcar.pool.ntp.org
+1.flatcar.pool.ntp.org
+2.flatcar.pool.ntp.org
+3.flatcar.pool.ntp.org
+```
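+
+On a running machine these servers can also be pinned explicitly in `/etc/systemd/timesyncd.conf` — a minimal sketch (`FallbackNTP` servers are only contacted when the `NTP=` servers are unreachable):
+
+```ini
+[Time]
+NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org
+FallbackNTP=2.flatcar.pool.ntp.org 3.flatcar.pool.ntp.org
+```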
+
+### Changing NTP time sources
+
+`systemd-timesyncd` can discover NTP servers from DHCP, individual [network][systemd.network] configs, the file [`timesyncd.conf`][timesyncd.conf], or the default `*.flatcar.pool.ntp.org` pool.
+
+The default behavior uses NTP servers provided by DHCP. To disable this, write a configuration listing your preferred NTP servers into the file `/etc/systemd/network/50-dhcp-no-ntp.conf`:
+
+```ini
+[Network]
+DHCP=v4
+NTP=0.pool.example.com 1.pool.example.com
+
+[DHCP]
+UseMTU=true
+UseDomains=true
+UseNTP=false
+```
+
+Then restart the network daemon:
+
+```shell
+sudo systemctl restart systemd-networkd
+```
+
+NTP time sources can be set in `timesyncd.conf` with a [Butane Config][butane-configs] snippet like:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/systemd/timesyncd.conf
+ mode: 0644
+ contents:
+ inline: |
+ [Time]
+ NTP=0.pool.example.com 1.pool.example.com
+```
+
+## Switching from timesyncd to ntpd
+
+You can switch from `systemd-timesyncd` to `ntpd` with the following commands:
+
+```shell
+sudo systemctl stop systemd-timesyncd
+sudo systemctl mask systemd-timesyncd
+sudo systemctl enable ntpd
+sudo systemctl start ntpd
+```
+
+or with this Butane Config snippet:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: systemd-timesyncd.service
+ mask: true
+ - name: ntpd.service
+ enabled: true
+```
+
+Because `timesyncd` and `ntpd` are mutually exclusive, it's important to `mask` the `systemd-timesyncd` service. `systemctl disable` or `stop` alone will not prevent a default service from starting again.
+
+### Configuring ntpd
+
+The `ntpd` service reads all configuration from the file `/etc/ntp.conf`. It does not use DHCP or other configuration sources. To use a different set of NTP servers, replace the `/etc/ntp.conf` symlink with something like the following:
+
+```text
+server 0.pool.example.com
+server 1.pool.example.com
+
+restrict default nomodify nopeer noquery limited kod
+restrict 127.0.0.1
+restrict [::1]
+```
+
+Then ask `ntpd` to reload its configuration:
+
+```shell
+sudo systemctl reload ntpd
+```
+
+Or, in a [Butane Config][butane-configs]:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/ntp.conf
+ overwrite: true
+ mode: 0644
+ contents:
+ inline: |
+ server 0.pool.example.com
+ server 1.pool.example.com
+
+ # - Allow only time queries, at a limited rate.
+ # - Allow all local queries (IPv4, IPv6)
+ restrict default nomodify nopeer noquery limited kod
+ restrict 127.0.0.1
+ restrict [::1]
+```
+
+[timedatectl]: http://www.freedesktop.org/software/systemd/man/timedatectl.html
+[ntp.org]: http://ntp.org/
+[systemd-timesyncd]: http://www.freedesktop.org/software/systemd/man/systemd-timesyncd.service.html
+[systemd.network]: http://www.freedesktop.org/software/systemd/man/systemd.network.html
+[timesyncd.conf]: http://www.freedesktop.org/software/systemd/man/timesyncd.conf.html
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/setup/customization/configuring-dns.md b/content/docs/latest/setup/customization/configuring-dns.md
new file mode 100644
index 00000000..c0e007a3
--- /dev/null
+++ b/content/docs/latest/setup/customization/configuring-dns.md
@@ -0,0 +1,55 @@
+---
+title: DNS Configuration
+description: How DNS resolution works and how to setup local DNS caching.
+weight: 30
+aliases:
+ - ../../os/configuring-dns
+ - ../../clusters/customization/configuring-dns
+---
+
+By default, DNS resolution on Flatcar Container Linux is handled through `/etc/resolv.conf`, which is a symlink to `/run/systemd/resolve/resolv.conf`. This file is managed by [systemd-resolved][systemd-resolved]. Normally, `systemd-resolved` gets DNS IP addresses from [systemd-networkd][systemd-networkd], either via DHCP or static configuration. DNS IP addresses can also be set via `systemd-resolved`'s [resolved.conf][resolved.conf]. See [Network configuration with networkd][networkd-config] for more information on `systemd-networkd`.
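+
+For example, static DNS servers can be set through [resolved.conf][resolved.conf] with a drop-in file like the following (a sketch — the server addresses are placeholders):
+
+```ini
+# /etc/systemd/resolved.conf.d/dns.conf
+[Resolve]
+DNS=192.0.2.53 198.51.100.53
+```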
+
+## Using a local DNS cache
+
+`systemd-resolved` includes a caching DNS resolver. To use it for DNS resolution and caching, you must enable it via [nsswitch.conf][nsswitch.conf] by adding `resolve` to the `hosts` section.
+
+Here is an example [Butane Config][butane-configs] snippet to do that:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/nsswitch.conf
+ mode: 0644
+ contents:
+ inline: |
+ # /etc/nsswitch.conf:
+
+ passwd: files usrfiles
+ shadow: files usrfiles
+ group: files usrfiles
+
+ hosts: files usrfiles resolve dns
+ networks: files usrfiles dns
+
+ services: files usrfiles
+ protocols: files usrfiles
+ rpc: files usrfiles
+
+ ethers: files
+ netmasks: files
+ netgroup: files
+ bootparams: files
+ automount: files
+ aliases: files
+```
+
+Only nss-aware applications can take advantage of the `systemd-resolved` cache. Notably, this means that statically linked Go programs and programs running within Docker/rkt will use `/etc/resolv.conf` only, and will not use the `systemd-resolved` cache.
+
+[systemd-resolved]: http://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html
+[systemd-networkd]: http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html
+[resolved.conf]: http://www.freedesktop.org/software/systemd/man/resolved.conf.html
+[nsswitch.conf]: http://man7.org/linux/man-pages/man5/nsswitch.conf.5.html
+[butane-configs]: ../../provisioning/config-transpiler
+[networkd-config]: network-config-with-networkd
diff --git a/content/docs/latest/setup/customization/customize-etcd-unit.md b/content/docs/latest/setup/customization/customize-etcd-unit.md
new file mode 100644
index 00000000..e5bf52fa
--- /dev/null
+++ b/content/docs/latest/setup/customization/customize-etcd-unit.md
@@ -0,0 +1,99 @@
+---
+title: Customizing the etcd unit
+description: How to setup etcd to use client certificates.
+weight: 50
+aliases:
+ - ../../os/customize-etcd-unit
+ - ../../clusters/customization/customize-etcd-unit
+---
+
+The etcd systemd unit can be customized by overriding the unit that ships with the default Flatcar Container Linux settings. Common use-cases for doing this are covered below.
+
+## Use client certificates
+
+etcd supports client certificates as a way to secure communication between clients and the leader, as well as internal traffic between etcd peers in the cluster. Configuring certificates for both scenarios is done through a Butane Config. Options provided here will augment the unit that ships with Flatcar Container Linux.
+
+Please follow the [instructions][self-signed-howto] on how to create self-signed certificates and private keys.
+
+Note that more etcd settings are needed for a proper configuration.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: etcd-member.service
+ enabled: true
+ dropins:
+ - name: 20-clct-etcd-member.conf
+ contents: |
+ [Service]
+ ExecStart=
+ ExecStart=/usr/lib/coreos/etcd-wrapper $ETCD_OPTS \
+ --ca-file="/path/to/CA.pem" \
+ --cert-file="/path/to/server.crt" \
+ --key-file="/path/to/server.key" \
+ --peer-ca-file="/path/to/CA.pem" \
+ --peer-cert-file="/path/to/peers.crt" \
+ --peer-key-file="/path/to/peers.key"
+
+storage:
+ files:
+ - path: /path/to/CA.pem
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN CERTIFICATE-----
+ MIIFNDCCAx6gAwIBAgIBATALBgkqhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
+ ...snip...
+ EtHaxYQRy72yZrte6Ypw57xPRB8sw1DIYjr821Lw05DrLuBYcbyclg==
+ -----END CERTIFICATE-----
+ - path: /path/to/server.crt
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN CERTIFICATE-----
+ MIIFWTCCA0OgAwIBAgIBAjALBgkqhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
+ DgYDVQQKEwdldGNkLWNhMQswCQYDVQQLEwJDQTAeFw0xNDA1MjEyMTQ0MjhaFw0y
+ ...snip...
+ rdmtCVLOyo2wz/UTzvo7UpuxRrnizBHpytE4u0KgifGp1OOKY+1Lx8XSH7jJIaZB
+ a3m12FMs3AsSt7mzyZk+bH2WjZLrlUXyrvprI40=
+ -----END CERTIFICATE-----
+ - path: /path/to/server.key
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN RSA PRIVATE KEY-----
+ Proc-Type: 4,ENCRYPTED
+ DEK-Info: DES-EDE3-CBC,069abc493cd8bda6
+
+ TBX9mCqvzNMWZN6YQKR2cFxYISFreNk5Q938s5YClnCWz3B6KfwCZtjMlbdqAakj
+ ...snip...
+ mgVh2LBerGMbsdsTQ268sDvHKTdD9MDAunZlQIgO2zotARY02MLV/Q5erASYdCxk
+ -----END RSA PRIVATE KEY-----
+ - path: /path/to/peers.crt
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN CERTIFICATE-----
+ VQQLEwJDQTAeFw0xNDA1MjEyMTQ0MjhaFw0yMIIFWTCCA0OgAwIBAgIBAjALBgkq
+ DgYDVQQKEwdldGNkLWNhMQswCQYDhkiG9w0BAQUwLTEMMAoGA1UEBhMDVVNBMRAw
+ ...snip...
+ BHpytE4u0KgifGp1OOKY+1Lx8XSH7jJIaZBrdmtCVLOyo2wz/UTzvo7UpuxRrniz
+ St7mza3m12FMs3AsyZk+bH2WjZLrlUXyrvprI90=
+ -----END CERTIFICATE-----
+ - path: /path/to/peers.key
+ mode: 0644
+ contents:
+ inline: |
+ -----BEGIN RSA PRIVATE KEY-----
+ Proc-Type: 4,ENCRYPTED
+ DEK-Info: DES-EDE3-CBC,069abc493cd8bda6
+
+ SFreNk5Q938s5YTBX9mCqvzNMWZN6YQKR2cFxYIClnCWz3B6KfwCZtjMlbdqAakj
+ ...snip...
+ DvHKTdD9MDAunZlQIgO2zotmgVh2LBerGMbsdsTQ268sARY02MLV/Q5erASYdCxk
+ -----END RSA PRIVATE KEY-----
+```
+
+[self-signed-howto]: ../security/generate-self-signed-certificates
diff --git a/content/docs/latest/setup/customization/network-config-with-networkd.md b/content/docs/latest/setup/customization/network-config-with-networkd.md
new file mode 100644
index 00000000..6a78f1c6
--- /dev/null
+++ b/content/docs/latest/setup/customization/network-config-with-networkd.md
@@ -0,0 +1,229 @@
+---
+title: Network configuration with networkd
+description: How to setup static networking, turn on/off ipv4/ipv6, and debugging tips.
+weight: 40
+aliases:
+ - ../../os/network-config-with-networkd
+ - ../../clusters/customization/network-config-with-networkd
+---
+
+Flatcar Container Linux machines are preconfigured with [networking customized][notes-for-distributors] for each platform. You can write your own networkd units to replace or override the units created for each platform. This article covers a subset of networkd functionality. You can view the [full docs here](http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html).
+
+Drop a networkd unit in `/etc/systemd/network/` or inject a unit on boot via a Butane Config. After placing a file manually on the filesystem, reload networkd with `sudo systemctl restart systemd-networkd`. Network units injected via a Butane Config are written to disk before networkd is started, so no workarounds are needed.
+
+Let's take a look at two common situations: using a static IP and turning off DHCP.
+
+## Static networking
+
+To configure a static IP on `enp2s0`, create `static.network`:
+
+```ini
+[Match]
+Name=enp2s0
+
+[Network]
+Address=192.168.0.15/24
+Gateway=192.168.0.1
+DNS=1.2.3.4
+```
+
+Place the file in `/etc/systemd/network/`. To apply the configuration, run:
+
+```shell
+sudo systemctl restart systemd-networkd
+```
+
+### Butane Config
+
+Setting up static networking in your Butane Config can be done by writing out the network unit. Be sure to modify the `[Match]` section with the name of your desired interface, and replace the IPs:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/systemd/network/00-eth0.network
+ contents:
+ inline: |
+ [Match]
+ Name=eth0
+
+ [Network]
+ DNS=1.2.3.4
+ Address=10.0.0.101/24
+ Gateway=10.0.0.1
+```
+
+## Turn off DHCP on specific interface
+
+If you'd like to use DHCP on all interfaces except `enp2s0`, create two files. They'll be checked in lexical order, as described in the [full network docs](http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html). Interfaces matched by earlier files are ignored by later files.
+
+`10-static.network`:
+
+```ini
+[Match]
+Name=enp2s0
+
+[Network]
+Address=192.168.0.15/24
+Gateway=192.168.0.1
+DNS=1.2.3.4
+```
+
+Put your settings-of-last-resort in `20-dhcp.network`. For example, any interfaces matching `en*` that weren't matched in `10-static.network` will be configured with DHCP:
+
+`20-dhcp.network`:
+
+```ini
+[Match]
+Name=en*
+
+[Network]
+DHCP=yes
+```
+
+To apply the configuration, run `sudo systemctl restart systemd-networkd`. Check the status with `systemctl status systemd-networkd` and read the full log with `journalctl -u systemd-networkd`.
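+
+Both files can also be provisioned with a [Butane Config][butane-configs] — a sketch reusing the example values above:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/systemd/network/10-static.network
+      contents:
+        inline: |
+          [Match]
+          Name=enp2s0
+
+          [Network]
+          Address=192.168.0.15/24
+          Gateway=192.168.0.1
+          DNS=1.2.3.4
+    - path: /etc/systemd/network/20-dhcp.network
+      contents:
+        inline: |
+          [Match]
+          Name=en*
+
+          [Network]
+          DHCP=yes
+```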
+
+## Turn off IPv6 on specific interfaces
+
+While IPv6 can be disabled globally at boot by appending `ipv6.disable=1` to the kernel command line, networkd supports disabling IPv6 on a per-interface basis. When a network unit's `[Network]` section has either `LinkLocalAddressing=ipv4` or `LinkLocalAddressing=no`, networkd will not try to configure IPv6 on the matching interfaces.
+
+Note however that even when using the above option, networkd will still be expecting to receive router advertisements if IPv6 is not disabled globally. If IPv6 traffic is not being received by the interface (e.g. due to `sysctl` or `ip6tables` settings), it will remain in the `configuring` state and potentially cause timeouts for services waiting for the network to be fully configured. To avoid this, the `IPv6AcceptRA=no` option should also be set in the `[Network]` section.
+
+A network unit file's `[Network]` section should therefore contain the following to disable IPv6 on its matching interfaces.
+
+```ini
+[Network]
+LinkLocalAddressing=no
+IPv6AcceptRA=no
+```
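+
+A complete unit would combine this with a `[Match]` section — a sketch, where `eth1` is an example interface name:
+
+```ini
+[Match]
+Name=eth1
+
+[Network]
+LinkLocalAddressing=no
+IPv6AcceptRA=no
+```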
+
+## Configure static routes
+
+Specify static routes in a systemd network unit's `[Route]` section. In this example, we create a unit file, `10-static.network`, and define in it a static route to the `172.16.0.0/24` subnet:
+
+`10-static.network`:
+
+```ini
+[Route]
+Gateway=192.168.122.1
+Destination=172.16.0.0/24
+```
+
+To specify the same route in a Butane Config, create the systemd network unit there instead:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/systemd/network/10-static.network
+ contents:
+ inline: |
+ [Route]
+ Gateway=192.168.122.1
+ Destination=172.16.0.0/24
+```
+
+## Configure multiple IP addresses
+
+To configure multiple IP addresses on one interface, we define multiple `Address` keys in the network unit. In the example below, we've also defined a different gateway for each IP address.
+
+`20-multi_ip.network`:
+
+```ini
+[Match]
+Name=eth0
+
+[Network]
+DNS=8.8.8.8
+Address=10.0.0.101/24
+Gateway=10.0.0.1
+Address=10.0.1.101/24
+Gateway=10.0.1.1
+```
+
+To do the same thing through a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/systemd/network/20-multi_ip.network
+ contents:
+ inline: |
+ [Match]
+ Name=eth0
+
+ [Network]
+ DNS=8.8.8.8
+ Address=10.0.0.101/24
+ Gateway=10.0.0.1
+ Address=10.0.1.101/24
+ Gateway=10.0.1.1
+```
+
+To verify whether your configuration was successful and view all IP addresses associated with a specific interface, use `ip [-4|-6] addr show dev <interface>`. Here is an example of the command and its output:
+
+```shell
+$ ip -4 addr show dev eth0
+3: eth0: mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ inet 10.0.0.101/24 brd 10.0.0.255 scope global eth0
+ valid_lft forever preferred_lft forever
+ inet 10.0.1.101/24 brd 10.0.1.255 scope global secondary eth0
+ valid_lft forever preferred_lft forever
+```
+
+The output of `ip -4 addr show dev eth0` includes the interface's state, such as whether it is UP or DOWN, its assigned IP addresses with the corresponding subnet masks, and other relevant details.
+
+## Debugging networkd
+
+If you face problems with networkd, you can enable debug mode by following the instructions below.
+
+### Enable debugging manually
+
+```shell
+mkdir -p /etc/systemd/system/systemd-networkd.service.d/
+```
+
+Create a [Drop-In][drop-ins] `/etc/systemd/system/systemd-networkd.service.d/10-debug.conf` with the following content:
+
+```ini
+[Service]
+Environment=SYSTEMD_LOG_LEVEL=debug
+```
+
+Then restart the `systemd-networkd` service:
+
+```shell
+systemctl daemon-reload
+systemctl restart systemd-networkd
+journalctl -b -u systemd-networkd
+```
+
+### Enable debugging through a Butane Config
+
+Define a [Drop-In][drop-ins] in a [Butane Linux Config][butane-configs]:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: systemd-networkd.service
+ dropins:
+ - name: 10-debug.conf
+ contents: |
+ [Service]
+ Environment=SYSTEMD_LOG_LEVEL=debug
+```
+
+## Further reading
+
+- [networkd full documentation](http://www.freedesktop.org/software/systemd/man/systemd-networkd.service.html)
+- [Getting Started with systemd](../systemd/getting-started)
+- [Reading the System Log](../debug/reading-the-system-log)
+
+[butane-configs]: ../../provisioning/config-transpiler
+[drop-ins]: ../systemd/drop-in-units
+[notes-for-distributors]: ../../installing/community-platforms/notes-for-distributors
diff --git a/content/docs/latest/setup/customization/other-settings.md b/content/docs/latest/setup/customization/other-settings.md
new file mode 100644
index 00000000..9a46828b
--- /dev/null
+++ b/content/docs/latest/setup/customization/other-settings.md
@@ -0,0 +1,236 @@
+---
+title: Kernel modules and other settings
+description: How to configure kernel modules, sysctl parameters, and other common Flatcar settings.
+weight: 30
+aliases:
+ - ../../os/other-settings
+ - ../../clusters/customization/other-settings
+---
+
+## Loading kernel modules
+
+Most Linux kernel modules are loaded automatically as needed, but there are some situations where this doesn't work. Problems can arise if boot-time dependencies are sensitive to exactly when the module gets loaded, and module auto-loading can be broken altogether if the operation requiring the module happens inside a container. `iptables` and other netfilter features can easily encounter both of these issues. To force modules to be loaded early during boot, simply list them in a file under `/etc/modules-load.d`. The file name must end in `.conf`.
+
+```shell
+echo nf_conntrack > /etc/modules-load.d/nf.conf
+```
+
+Or, using a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/modules-load.d/nf.conf
+ mode: 0644
+ contents:
+ inline: nf_conntrack
+```
+
+### Loading kernel modules with options
+
+The following section demonstrates how to provide module options when loading. After these configs are processed, the dummy module is loaded into the kernel, and five dummy interfaces are added to the network stack.
+
+Further details can be found in the systemd man pages:
+
+ * [modules-load.d(5)](http://www.freedesktop.org/software/systemd/man/modules-load.d.html)
+ * [systemd-modules-load.service(8)](http://www.freedesktop.org/software/systemd/man/systemd-modules-load.service.html)
+ * [modprobe.d(5)](http://linux.die.net/man/5/modprobe.d)
+
+This example Butane Config loads the `dummy` network interface module with an option specifying the number of interfaces the module should create when loaded (`numdummies=5`):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/modprobe.d/dummy.conf
+ mode: 0644
+ contents:
+ inline: options dummy numdummies=5
+ - path: /etc/modules-load.d/dummy.conf
+ mode: 0644
+ contents:
+ inline: dummy
+```
+
+## Tuning sysctl parameters
+
+The Linux kernel offers a plethora of knobs under `/proc/sys` to control the availability of different features and tune performance parameters. For one-shot changes, values can be written directly to the files under `/proc/sys`, but persistent settings must be written to `/etc/sysctl.d`:
+
+```shell
+echo net.netfilter.nf_conntrack_max=131072 > /etc/sysctl.d/nf.conf
+sysctl --system
+```
+
+Some parameters, such as the conntrack one above, are only available after the module they control has been loaded. To ensure any required modules are loaded in advance, use `modules-load.d` as described above. A complete Butane Config using both would look like:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/modules-load.d/nf.conf
+ mode: 0644
+ contents:
+ inline: |
+ nf_conntrack
+ - path: /etc/sysctl.d/nf.conf
+ mode: 0644
+ contents:
+ inline: |
+ net.netfilter.nf_conntrack_max=131072
+```
+
+Further details can be found in the systemd man pages:
+
+ * [sysctl.d(5)](http://www.freedesktop.org/software/systemd/man/sysctl.d.html)
+ * [systemd-sysctl.service(8)](http://www.freedesktop.org/software/systemd/man/systemd-sysctl.service.html)
+
+## Adding custom kernel boot options
+
+The Flatcar Container Linux bootloader parses the configuration file `/usr/share/oem/grub.cfg`, where custom kernel boot options may be set.
+
+The `/usr/share/oem/grub.cfg` file can be configured with Ignition. Beginning with Flatcar major version 3185, the `kernelArguments` directive in Ignition v3 allows adding or removing kernel command line parameters and rebooting the system directly from the initramfs to apply them as part of the first-boot setup.
+It only works for unconditional `set linux_append` statements in `grub.cfg`; any existing `linux_console` statement is not considered.
+
+Here's an example that ensures `flatcar.autologin` is present on the kernel command line while `quiet` is not.
+First the Butane YAML config, then the transpiled Ignition v3 config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+kernel_arguments:
+ should_exist:
+ - flatcar.autologin
+ should_not_exist:
+ - quiet
+```
+
+```json
+{
+ "ignition": {
+ "version": "3.3.0"
+ },
+ "kernelArguments": {
+ "shouldExist": [
+ "flatcar.autologin"
+ ],
+ "shouldNotExist": [
+ "quiet"
+ ]
+ }
+}
+```
+
+Instead of using `kernelArguments`, you can also use the plain file directive in Ignition to write to `/usr/share/oem/grub.cfg`.
+However, because Ignition runs after GRUB, the GRUB configuration won't take effect until the next reboot of the node. This approach is
+particularly useful if you are bound to Ignition v2 (which requires the use of `ct` instead of `butane`).
+
+Here's an example Container Linux Configuration for using the plain file directive (this YAML content has to be transpiled to Ignition JSON with `ct`):
+
+```yaml
+storage:
+ filesystems:
+ - name: "OEM"
+ mount:
+ device: "/dev/disk/by-label/OEM"
+ format: "btrfs"
+ files:
+ - filesystem: "OEM"
+ path: "/grub.cfg"
+ mode: 0644
+ append: true
+ contents:
+ inline: |
+ set linux_append="$linux_append flatcar.autologin=tty1"
+```
+
+To take effect directly on the first boot, the alternative is to create a `getty@.service` drop-in; here is a snippet that works with both `ct` and `butane`:
+
+```yaml
+systemd:
+ units:
+ - name: getty@.service
+ dropins:
+ - name: 10-autologin.conf
+ contents: |
+ [Service]
+ ExecStart=
+ ExecStart=-/sbin/agetty --noclear %I $TERM
+```
+
+### Enable Flatcar Container Linux autologin
+
+To login without a password for the `core` user on the serial or VGA console on every boot, edit `/usr/share/oem/grub.cfg` to add a line like this:
+
+```text
+set linux_append="$linux_append flatcar.autologin=tty1"
+```
+
+Without specifying `=tty1` any TTY will be used, e.g., the serial console.
+
+To control this setting at provisioning time, use the Ignition v3 `kernelArguments` directive with `shouldExist` or `shouldNotExist` (see the Butane config in the section above).
+
+### Enable systemd debug logging
+
+Edit `/usr/share/oem/grub.cfg` to add the following line, enabling systemd's most verbose `debug`-level logging:
+
+```text
+set linux_append="$linux_append systemd.log_level=debug"
+```
+
+### Mask a systemd unit
+
+Completely disable the `systemd-networkd.service` unit by adding this line to `/usr/share/oem/grub.cfg`:
+
+```text
+set linux_append="$linux_append systemd.mask=systemd-networkd.service"
+```
+
+## Adding custom messages to MOTD
+
+When logging in interactively, a brief message (the "Message of the Day" (MOTD)) reports the Flatcar Container Linux release channel, version, and a list of any services or systemd units that have failed. Since Flatcar Container Linux version 555.0.0, additional text can be added by dropping text files into `/etc/motd.d`. The directory may need to be created first, and the drop-in file name must end in `.conf`.
+
+```shell
+mkdir -p /etc/motd.d
+echo "This machine is dedicated to computing Pi" > /etc/motd.d/pi.conf
+```
+
+Or via a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/motd.d/pi.conf
+ mode: 0644
+ contents:
+ inline: This machine is dedicated to computing Pi
+```
+
+## Prevent login prompts from clearing the console
+
+The system boot messages that are printed to the console will be cleared when systemd starts a login prompt. In order to preserve these messages, the `getty` services will need to have their `TTYVTDisallocate` setting disabled. This can be achieved with a drop-in for the template unit, `getty@.service`. Note that the console will still scroll so the login prompt is at the top of the screen, but the boot messages will be available by scrolling.
+
+```shell
+mkdir -p '/etc/systemd/system/getty@.service.d'
+echo -e '[Service]\nTTYVTDisallocate=no' > '/etc/systemd/system/getty@.service.d/no-disallocate.conf'
+```
+
+Or via a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: getty@.service
+ dropins:
+ - name: no-disallocate.conf
+ contents: |
+ [Service]
+ TTYVTDisallocate=no
+```
+
+When the `TTYVTDisallocate` setting is disabled, the console scrollback is not cleared on logout, not even by the `clear` command in the default `.bash_logout` file. Scrollback must be cleared explicitly, e.g. by running `echo -en '\033[3J' > /dev/console` as the root user.
diff --git a/content/docs/latest/setup/customization/power-management.md b/content/docs/latest/setup/customization/power-management.md
new file mode 100644
index 00000000..117539e2
--- /dev/null
+++ b/content/docs/latest/setup/customization/power-management.md
@@ -0,0 +1,57 @@
+---
+title: Tuning Flatcar Container Linux power management
+linktitle: Power Management
+description: How to choose the CPU governor to use.
+weight: 50
+aliases:
+ - ../../os/power-management
+ - ../../clusters/scaling/power-management
+---
+
+## CPU governor
+
+By default, Flatcar Container Linux uses the "performance" CPU governor, meaning that the CPU operates at the maximum frequency regardless of load. This is reasonable for a system that is under constant load or cannot tolerate increased latency. On the other hand, if the system is idle much of the time and latency is not a concern, power savings may be desired.
+
+Several governors are available:
+
+| Governor | Description |
+|----------------|-----------------------------------------------------------------------|
+| `performance` | Default. Operate at the maximum frequency |
+| `ondemand`     | Dynamically scale frequency at 75% CPU load                            |
+| `conservative` | Dynamically scale frequency at 95% CPU load                            |
+| `powersave` | Operate at the minimum frequency |
+| `userspace` | Controlled by a userspace application via the `scaling_setspeed` file |
+
+The "conservative" governor can be enabled instead with the following shell commands:
+
+```shell
+modprobe cpufreq_conservative
+echo "conservative" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor > /dev/null
+```
+
+This can be configured with a [Butane Config][butane-configs] as well:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: cpu-governor.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Enable CPU power saving
+
+ [Service]
+ Type=oneshot
+ RemainAfterExit=yes
+ ExecStart=/usr/sbin/modprobe cpufreq_conservative
+ ExecStart=/usr/bin/sh -c '/usr/bin/echo "conservative" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor'
+
+ [Install]
+ WantedBy=multi-user.target
+```
+
+More information on further tuning each governor is available in the [Kernel Documentation](https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt).
+
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/setup/customization/using-nvidia.md b/content/docs/latest/setup/customization/using-nvidia.md
new file mode 100644
index 00000000..c4f95667
--- /dev/null
+++ b/content/docs/latest/setup/customization/using-nvidia.md
@@ -0,0 +1,48 @@
+---
+title: Using NVIDIA GPUs on Flatcar
+description: How to use and customize the NVIDIA driver on Flatcar
+weight: 30
+---
+
+### Installation
+
+Flatcar Container Linux offers support for the installation and customization of NVIDIA drivers for Tesla GPUs (both in VMs and on bare metal). Note that with the release of version 3637.0.0, the NVIDIA drivers became available on all platforms; on older versions they are restricted to AWS and Azure only.
+
+During the initial boot, the `nvidia.service` automates hardware detection and triggers driver installation within a dedicated Flatcar developer container, ensuring a streamlined process. The current version of the installed NVIDIA driver can be found in the `/usr/share/flatcar/nvidia-metadata` file, assuming it's a vanilla installation and the version hasn't been customized (see below).
+
+It's important to note that Flatcar Container Linux adheres strictly to NVIDIA's distribution terms, and therefore does not include pre-installed support for NVIDIA drivers. However, Flatcar simplifies the installation process by seamlessly integrating it into the first boot experience. This approach allows users to quickly and effortlessly set up the NVIDIA driver environment, aligning with NVIDIA's guidelines for driver distribution.
+
+Since the installation is triggered after boot, the overall installation time is around 5-10 minutes. To check the installation status, use the following command:
+
+```shell
+# journalctl -u nvidia -f
+```
+
+Once the installation is complete, you will have access to various NVIDIA commands. To verify the installation, run the command:
+
+```shell
+# nvidia-smi
+```
+
+### Customization
+
+To customize the NVIDIA driver version, override it in the `/etc/flatcar/nvidia-metadata` file by setting the `NVIDIA_DRIVER_VERSION` variable. Make sure the chosen driver version is compatible with the GPU hardware present in the instance.
+For example, older GPUs need the 460 driver series because the latest drivers dropped support for them.
+
+```shell
+echo "NVIDIA_DRIVER_VERSION=460.106.00" | sudo tee /etc/flatcar/nvidia-metadata
+sudo systemctl restart nvidia
+```
+
+**Butane Config**
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/flatcar/nvidia-metadata
+ mode: 0644
+ contents:
+ inline: |
+ NVIDIA_DRIVER_VERSION=460.106.00
diff --git a/content/docs/latest/setup/debug/_index.md b/content/docs/latest/setup/debug/_index.md
new file mode 100644
index 00000000..52e73b6c
--- /dev/null
+++ b/content/docs/latest/setup/debug/_index.md
@@ -0,0 +1,7 @@
+---
+title: Debugging
+description: >
+ Useful tools and techniques to understand what's going on inside a
+ Flatcar instance when things don't work as expected.
+weight: 50
+---
diff --git a/content/docs/latest/setup/debug/btrfs-troubleshooting.md b/content/docs/latest/setup/debug/btrfs-troubleshooting.md
new file mode 100644
index 00000000..2dbd8a58
--- /dev/null
+++ b/content/docs/latest/setup/debug/btrfs-troubleshooting.md
@@ -0,0 +1,136 @@
+---
+title: Working with btrfs and common troubleshooting
+linktitle: Troubleshooting btrfs
+description: Tips and tricks for solving issues related to btrfs on Flatcar.
+weight: 30
+aliases:
+ - ../../os/btrfs-troubleshooting
+ - ../../clusters/debug/btrfs-troubleshooting
+---
+
+btrfs is a copy-on-write filesystem with full support in the upstream Linux kernel and several desirable features. In the past, Flatcar Container Linux shipped with a btrfs root filesystem to support Docker filesystem requirements at the time. As of version 561.0.0, Flatcar Container Linux ships with ext4 as the default root filesystem while still supporting Docker. btrfs still works with the latest Flatcar Container Linux releases and Docker, but we recommend using ext4.
+
+btrfs was marked as experimental for a long time, but it's now fully production-ready and supported by a number of Linux distributions.
+
+Notable Features of btrfs:
+
+- Ability to add/remove block devices without interruption
+- Ability to balance the filesystem without interruption
+- RAID 0, RAID 1, RAID 5, RAID 6 and RAID 10
+- Snapshots and file cloning
+
+This guide won't cover these topics — it's mostly focused on troubleshooting.
+
+For a more complete troubleshooting experience, let's explore how btrfs works under the hood.
+
+btrfs stores data in chunks across all of the block devices on the system. The total storage across these devices is shown in the standard output of `df -h`.
+
+Raw data and filesystem metadata are stored in one or many chunks, typically ~1GiB in size. When RAID is configured, these chunks are replicated instead of individual files.
+
+A copy-on-write filesystem keeps many versions of a single file, which is helpful for snapshotting and other advanced features, but can lead to fragmentation with some workloads.
+
+## No space left on device
+
+When the filesystem is out of chunks to write data into, `No space left on device` will be reported. This will prevent journal files from being recorded, containers from starting and so on.
+
+The common reaction to this error is to run `df -h`, which will often show some free space. That command doesn't measure the btrfs primitives (chunks, metadata, etc.), which are what really matter.
+
+Running `sudo btrfs fi show` will give you the btrfs view of how much free space you have. When starting/stopping many Docker containers or doing a large amount of random writes, chunks can end up inefficiently utilized over time.
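+
+The `size` and `used` columns reported by `btrfs fi show` can be turned into an allocation percentage with a short script. A sketch (the helper name and the `awk` parsing are assumptions based on the `devid` lines of its output):
+
+```shell
+# Print the chunk-allocation percentage per device, reading `btrfs fi show`
+# output on stdin (parses the "devid ... size XGiB used YGiB path ..." lines).
+btrfs_alloc_pct() {
+  awk '/devid/ {
+    size = $4; used = $6
+    gsub(/GiB/, "", size); gsub(/GiB/, "", used)
+    printf "%s %.0f%%\n", $NF, (used / size) * 100
+  }'
+}
+
+# Example: sudo btrfs fi show / | btrfs_alloc_pct
+```
+
+A value close to 100% while `df -h` still reports free space is the telltale sign that a re-balance is needed.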
+
+Re-balancing the filesystem ([official btrfs docs](https://btrfs.wiki.kernel.org/index.php/Balance_Filters)) will relocate data from empty or near-empty chunks to free up space. This operation can be done without downtime.
+
+First, let's see how much free space we have:
+
+```shell
+$ sudo btrfs fi show
+Label: 'ROOT' uuid: 82a40c46-557e-4848-ad4d-10c6e36ed5ad
+ Total devices 1 FS bytes used 13.44GiB
+ devid 1 size 32.68GiB used 32.68GiB path /dev/xvda9
+
+Btrfs v3.14_pre20140414
+```
+
+The answer: not a lot. We can re-balance to fix that.
+
+The re-balance command can be configured to only relocate data in chunks up to a certain percentage used. This will prevent you from moving around a lot of data without a lot of benefit. If your disk is completely full, you may need to delete a few containers to create space for the re-balance operation to work with.
+
+Let's try to relocate chunks with less than 5% of usage:
+
+```shell
+$ sudo btrfs fi balance start -dusage=5 /
+Done, had to relocate 5 out of 45 chunks
+$ sudo btrfs fi show
+Label: 'ROOT' uuid: 82a40c46-557e-4848-ad4d-10c6e36ed5ad
+ Total devices 1 FS bytes used 13.39GiB
+ devid 1 size 32.68GiB used 28.93GiB path /dev/xvda9
+
+Btrfs v3.14_pre20140414
+```
+
+The operation took about a minute on a cloud server and gained us 4GiB of space on the filesystem. It's up to you to find out what percentage works best for your workload, the speed of your disks, etc.
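+
+Rather than hand-tuning a single threshold, a common pattern is to retry the balance with increasing usage thresholds. A minimal sketch (the function name and the threshold steps are assumptions; adjust them to your workload):
+
+```shell
+# Compact progressively fuller chunks: start with nearly-empty chunks and
+# raise the usage threshold step by step.
+rebalance_in_steps() {
+  for pct in 5 10 25 50; do
+    sudo btrfs fi balance start -dusage="$pct" / || return 1
+  done
+}
+```
+
+Each step moves more data than the last, so later steps take longer; stop raising the threshold once `btrfs fi show` reports enough free space.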
+
+If your balance operation is taking a long time, you can open a new shell and find the status:
+
+```shell
+$ sudo btrfs balance status /
+Balance on '/' is running
+0 out of about 1 chunks balanced (1 considered), 100% left
+```
+
+## Adding a new physical disk
+
+New physical disks can be added to an existing btrfs filesystem. The first step is to have the new block device [mounted on the machine](../storage/mounting-storage). Afterwards, let btrfs know about the new device and re-balance the file system. The key step here is re-balancing, which will move the data and metadata across both block devices. Expect this process to take some time:
+
+```shell
+btrfs device add /dev/sdc /
+btrfs filesystem balance /
+```
+
+## Disable copy-on-write
+
+Copy-on-write isn't ideal for workloads that create or modify many small files, such as databases. Without disabling COW, you can heavily fragment the filesystem, as explained above.
+
+The best strategy for successfully running a database in a container is to disable COW on the directory/volume that is mounted into the container.
+
+The COW setting is stored as a file attribute and is modified with a utility called `chattr`. To disable COW for a MySQL container's volume, run:
+
+```shell
+sudo mkdir /var/lib/mysql
+sudo chattr -R +C /var/lib/mysql
+```
+
+The directory `/var/lib/mysql` is now ready to be used by a Docker container without COW. Let's break down the command:
+
+- `-R` indicates that we want to change the file attribute recursively
+- `+C` sets the NOCOW attribute on the file/directory
+
+Note that the NOCOW attribute only takes reliable effect on files created after it is set, which is why it is applied to the directory while it is still empty.
+
+To verify, we can run:
+
+```shell
+$ sudo lsattr /var/lib/
+---------------- /var/lib/portage
+---------------- /var/lib/gentoo
+---------------- /var/lib/iptables
+---------------- /var/lib/ip6tables
+---------------- /var/lib/arpd
+---------------- /var/lib/ipset
+---------------- /var/lib/dbus
+---------------- /var/lib/systemd
+---------------- /var/lib/polkit-1
+---------------- /var/lib/dhcpcd
+---------------- /var/lib/ntp
+---------------- /var/lib/nfs
+---------------- /var/lib/etcd
+---------------- /var/lib/docker
+---------------- /var/lib/update_engine
+---------------C /var/lib/mysql
+```
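+
+Checking for the `C` flag can also be scripted, e.g. to make a provisioning step idempotent. A sketch (the helper name is an assumption; the flag string is the first field of `lsattr -d` output):
+
+```shell
+# Report whether an lsattr flag string contains the NOCOW ("C") attribute.
+has_nocow() {
+  case "$1" in
+    *C*) echo yes ;;
+    *)   echo no ;;
+  esac
+}
+
+# Example: has_nocow "$(lsattr -d /var/lib/mysql | cut -d' ' -f1)"
+```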
+
+### Disable via a unit file
+
+Setting the file attributes can be done via a systemd unit using two `ExecStartPre` commands:
+
+```ini
+ExecStartPre=/usr/bin/mkdir -p /var/lib/mysql
+ExecStartPre=/usr/bin/chattr -R +C /var/lib/mysql
+```
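+
+In context, these drop-in lines would sit in a full service unit. A minimal sketch, assuming a hypothetical containerized MySQL service (the unit layout and image name are placeholders, not a recommended production setup):
+
+```ini
+[Unit]
+Description=MySQL container (hypothetical example)
+After=docker.service
+Requires=docker.service
+
+[Service]
+# Ensure the data directory exists and has COW disabled before starting.
+ExecStartPre=/usr/bin/mkdir -p /var/lib/mysql
+ExecStartPre=/usr/bin/chattr -R +C /var/lib/mysql
+ExecStart=/usr/bin/docker run --rm --name mysql -v /var/lib/mysql:/var/lib/mysql mysql
+ExecStop=/usr/bin/docker stop mysql
+
+[Install]
+WantedBy=multi-user.target
+```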
diff --git a/content/docs/latest/setup/debug/collecting-crash-logs.md b/content/docs/latest/setup/debug/collecting-crash-logs.md
new file mode 100644
index 00000000..7ac557d2
--- /dev/null
+++ b/content/docs/latest/setup/debug/collecting-crash-logs.md
@@ -0,0 +1,107 @@
+---
+title: Collecting crash logs on Flatcar Container Linux
+linktitle: Collecting crash logs
+description: How to use pstore to access crash logs.
+weight: 10
+aliases:
+ - ../../os/collecting-crash-logs
+ - ../../clusters/debug/collecting-crash-logs
+---
+
+In the unfortunate case that an OS crashes, it's often extremely helpful to gather information about the event. There are two popular tools used to accomplish this goal: kdump and pstore. Flatcar Container Linux relies on pstore, a persistent storage abstraction provided by the Linux kernel, to store logs in the event of a kernel panic. Since this mechanism is just an abstraction, it depends on hardware support to actually persist the data across reboots. If hardware support is absent, the pstore will remain empty. On AMD64 machines, pstore is typically backed by the ACPI error record serialization table (ERST) or EFI variables.
+
+## Check if pstore support exists
+
+The content of `/sys/module/pstore/parameters/backend` tells whether a pstore backend exists.
+If it only contains `(null)`, the system has no pstore support and won't store the kernel logs there - you have to monitor the serial console or use kdump.
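+
+This check can be wrapped in a small helper, e.g. for a fleet-wide health script. A sketch (the function name is an assumption; the path is a parameter only so the logic is easy to test):
+
+```shell
+# Print the pstore backend name, or "none" when no backend is configured.
+pstore_backend() {
+  b=$(cat "${1:-/sys/module/pstore/parameters/backend}" 2>/dev/null)
+  case "$b" in
+    ""|"(null)") echo none ;;
+    *)           echo "$b" ;;
+  esac
+}
+```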
+
+## Using pstore
+
+On Flatcar Container Linux, the pstore is automatically mounted to `/sys/fs/pstore` but files available there get automatically moved to `/var/lib/systemd/pstore/` through `systemd-pstore.service` after boot. The contents of the store can be explored using standard filesystem tools:
+
+```shell
+ls /var/lib/systemd/pstore/
+```
+
+On this particular machine, there isn't anything in the pstore yet. In order to test the mechanism, a kernel panic can be triggered:
+
+```shell
+echo c > /proc/sysrq-trigger
+```
+
+Once the machine boots, the pstore can again be inspected:
+
+```shell
+$ ls /var/lib/systemd/pstore/
+dmesg-erst-6319986351055831041 dmesg-erst-6319986351055831044
+dmesg-erst-6319986351055831042 dmesg-erst-6319986351055831045
+dmesg-erst-6319986351055831043
+```
+
+Now there are a series of dmesg logs, stored in the ACPI ERST. Looking at the first file, the cause of the panic can be discovered:
+
+```shell
+$ cat /var/lib/systemd/pstore/dmesg-erst-6319986351055831041
+Oops#1 Part1
+...
+<6>[ 201.650687] sysrq: SysRq : Trigger a crash
+<1>[ 201.654822] BUG: unable to handle kernel NULL pointer dereference at (null)
+<1>[ 201.662670] IP: [] sysrq_handle_crash+0x16/0x20
+<4>[ 201.668783] PGD 0
+<4>[ 201.670809] Oops: 0002 [#1] SMP
+<4>[ 201.673948] Modules linked in: coretemp sb_edac edac_core x86_pkg_temp_thermal kvm_intel ipmi_ssif kvm mei_me irqbypass i2c_i801 mousedev evdev mei ipmi_si ipmi_msghandler tpm_tis button tpm sch_fq_codel ip_tables hid_generic usbhid hid sd_mod squashfs loop igb ahci xhci_pci ehci_pci i2c_algo_bit libahci xhci_hcd ehci_hcd i2c_core libata i40e hwmon usbcore ptp crc32c_intel scsi_mod usb_common pps_core dm_mirror dm_region_hash dm_log dm_mod autofs4
+<4>[ 201.714354] CPU: 0 PID: 1899 Comm: bash Not tainted 4.7.0-coreos #1
+<4>[ 201.720612] Hardware name: Supermicro SYS-F618R3-FT/X10DRFF, BIOS 1.0b 01/07/2015
+<4>[ 201.728083] task: ffff881fdca79d40 ti: ffff881fd92d0000 task.ti: ffff881fd92d0000
+<4>[ 201.735553] RIP: 0010:[] [] sysrq_handle_crash+0x16/0x20
+<4>[ 201.744083] RSP: 0018:ffff881fd92d3d98 EFLAGS: 00010286
+<4>[ 201.749388] RAX: 000000000000000f RBX: 0000000000000063 RCX: 0000000000000000
+<4>[ 201.756511] RDX: 0000000000000000 RSI: ffff881fff80dbc8 RDI: 0000000000000063
+<4>[ 201.763635] RBP: ffff881fd92d3d98 R08: ffff88407ff57b80 R09: 00000000000000c2
+<4>[ 201.770759] R10: ffff881fe4fab624 R11: 00000000000005dd R12: 0000000000000007
+<4>[ 201.777885] R13: 0000000000000000 R14: ffffffffbdac37a0 R15: 0000000000000000
+<4>[ 201.785009] FS: 00007fa68acee700(0000) GS:ffff881fff800000(0000) knlGS:0000000000000000
+<4>[ 201.793085] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+<4>[ 201.798825] CR2: 0000000000000000 CR3: 0000001fdcc97000 CR4: 00000000001406f0
+<4>[ 201.805949] Stack:
+<4>[ 201.807961] ffff881fd92d3dc8 ffffffffbd3d2146 0000000000000002 fffffffffffffffb
+<4>[ 201.815413] 00007fa68acf6000 ffff883fe2e46f00 ffff881fd92d3de0 ffffffffbd3d259f
+<4>[ 201.822866] ffff881fe4fab5c0 ffff881fd92d3e00 ffffffffbd24fda8 ffff883fe2e46f00
+<4>[ 201.830320] Call Trace:
+<4>[ 201.832769] [] __handle_sysrq+0xf6/0x150
+<4>[ 201.838331] [] write_sysrq_trigger+0x2f/0x40
+<4>[ 201.844244] [] proc_reg_write+0x48/0x70
+<4>[ 201.849723] [] __vfs_write+0x37/0x140
+<4>[ 201.855038] [] ? security_file_permission+0x3d/0xc0
+<4>[ 201.861561] [] ? percpu_down_read+0x12/0x60
+<4>[ 201.867383] [] vfs_write+0xb8/0x1a0
+<4>[ 201.872514] [] SyS_write+0x55/0xc0
+<4>[ 201.877562] [] do_syscall_64+0x5d/0x150
+<4>[ 201.883047] [] entry_SYSCALL64_slow_path+0x25/0x25
+<4>[ 201.889474] Code: df ff 48 c7 c7 f3 a3 7d bd e8 47 c5 d3 ff e9 de fe ff ff 66 90 0f 1f 44 00 00 55 c7 05 48 b4 66 00 01 00 00 00 48 89 e5 0f ae f8 04 25 00 00 00 00 01 5d c3 0f 1f 44 00 00 55 31 c0 c7 05 5e
+<1>[ 201.909425] RIP [] sysrq_handle_crash+0x16/0x20
+<4>[ 201.915615] RSP
+<4>[ 201.919097] CR2: 0000000000000000
+<4>[ 201.922450] ---[ end trace 8794939ba0598b91 ]---
+```
+
+The cause of the panic was a system request! The remaining files in the pstore contain more of the logs leading up to the panic as well as more context. Each of the files has a small, descriptive header describing the source of the logs. Looking at each of the headers shows the rough structure of the logs:
+
+```shell
+$ head --lines=1 /var/lib/systemd/pstore/dmesg-erst-6319986351055831041
+Oops#1 Part1
+
+$ head --lines=1 /var/lib/systemd/pstore/dmesg-erst-6319986351055831042
+Oops#1 Part2
+
+$ head --lines=1 /var/lib/systemd/pstore/dmesg-erst-6319986351055831043
+Panic#2 Part1
+
+$ head --lines=1 /var/lib/systemd/pstore/dmesg-erst-6319986351055831044
+Panic#2 Part2
+
+$ head --lines=1 /var/lib/systemd/pstore/dmesg-erst-6319986351055831045
+Panic#2 Part3
+```
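+
+Printing every header in one go is a quick way to see how many crash events the store holds. A sketch (the helper name is an assumption; the directory defaults to the systemd pstore location):
+
+```shell
+# List each stored crash log together with its "Oops#/Panic#" header line.
+pstore_headers() {
+  dir="${1:-/var/lib/systemd/pstore}"
+  for f in "$dir"/dmesg-*; do
+    [ -e "$f" ] || continue
+    printf '%s: %s\n' "${f##*/}" "$(head -n 1 "$f")"
+  done
+}
+```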
+
+It is important to note that the pstore typically has very limited storage space (on the order of kilobytes) and will not overwrite entries when out of space. Flatcar Container Linux relies on `systemd-pstore.service` to ensure maximal free space by moving the files from `/sys/fs/pstore/` to `/var/lib/systemd/pstore/` on each boot.
diff --git a/content/docs/latest/setup/debug/install-debugging-tools.md b/content/docs/latest/setup/debug/install-debugging-tools.md
new file mode 100644
index 00000000..9ef633be
--- /dev/null
+++ b/content/docs/latest/setup/debug/install-debugging-tools.md
@@ -0,0 +1,127 @@
+---
+title: Debugging tools on Flatcar Container Linux
+linktitle: Debugging tools
+description: How to use the Flatcar "toolbox" to debug problems.
+weight: 10
+aliases:
+ - ../../os/install-debugging-tools
+ - ../../clusters/debug/install-debugging-tools
+---
+
+You can use common debugging tools like tcpdump or strace with Toolbox. Using the filesystem of a specified Docker image, Toolbox launches a container with full system privileges, including access to system PIDs, network interfaces and other global information. Inside the toolbox, the machine's filesystem is mounted at `/media/root`.
+
+## Quick debugging
+
+By default, Toolbox uses the stock Fedora Docker container. To start using it, simply run:
+
+```shell
+/usr/bin/toolbox
+```
+
+_NOTE_: For Fedora, it's recommended to use at least 2048 MB of RAM, otherwise the following `dnf` operations may be killed by the OOM killer.
+
+You're now in the namespace of Fedora and can install any software you'd like via `dnf`. For example, if you'd like to use `tcpdump`:
+
+```shell
+[root@srv-3qy0p ~]# dnf -y install tcpdump
+[root@srv-3qy0p ~]# tcpdump -i ens3
+tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
+listening on ens3, link-type EN10MB (Ethernet), capture size 65535 bytes
+```
+
+### Specify a custom Docker image
+
+Create a `.toolboxrc` in the user's home folder to use a specific Docker image:
+
+```shell
+$ cat .toolboxrc
+TOOLBOX_DOCKER_IMAGE=index.example.com/debug
+TOOLBOX_USER=root
+$ /usr/bin/toolbox
+Pulling repository index.example.com/debug
+...
+```
+
+You can also specify this in a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /home/core/.toolboxrc
+ mode: 0644
+ contents:
+ inline: |
+ TOOLBOX_DOCKER_IMAGE=index.example.com/debug
+ TOOLBOX_DOCKER_TAG=v1
+ TOOLBOX_USER=root
+```
+
+## Under the hood
+
+Behind the scenes, `toolbox` downloads, prepares and exports the container
+image you specify (or the default `fedora` image), then creates a container
+from that extracted image by calling `systemd-nspawn`. The exported
+image is retained in
+`/var/lib/toolbox/[username]-[image name]-[image tag]`, e.g. the default
+image run by the `core` user is at `/var/lib/toolbox/core-fedora-latest`.
+
+This means two important things:
+
+* Changes made inside the container will persist between sessions
+* The container filesystem will take up space on disk (a few hundred MiB
+for the default `fedora` container)
+
+## Spawn a toolbox with tmux in the background
+
+Since `toolbox` can only be started once, it is not straightforward to use `tmux`
+for long-running jobs or sharing a debugging session with someone else.
+
+To keep user processes running in the background after logging out with SSH,
+you need to start them via `systemd-run` because _process lingering_ is disabled
+by default in logind and all non-service user processes are killed on logout.
+Spawn a user service to persist the toolbox container with the `tmux` process
+even when you log out with SSH.
+The following command line will ensure `tmux`, `strace` and `pidof` are installed
+in the container, then create a new `tmux` session to which you can later attach,
+and keep the service active by waiting with `strace` until the `tmux` process exits.
+
+```shell
+systemd-run --user toolbox sh -c 'dnf install -y tmux strace procps-ng; TERM=tmux tmux new-session -d -s sharedsession; strace -p "$(pidof tmux)"'
+```
+
+With `-d` we tell `tmux` to not allocate a TTY now (needed for `systemd-run`) but run a
+new session in the background.
+Because `tmux` forks away, we cannot use `wait` in the shell to wait for children but need
+to use `strace` to have a foreground process running that prevents `toolbox` from quitting.
+
+Once this is running you can attach to the `tmux` session as often as you want from any SSH connection.
+
+```shell
+sudo nsenter -t "$(pidof tmux | cut -d ' ' -f 1)" -a tmux a
+```
+
+As usual with `tmux` you can attach and detach to the session as many times as you want because detaching
+still keeps `tmux` running in the background. But keep in mind that if you exit the session, the process
+started with `systemd-run` will terminate and you'll have to start the service again with `systemd-run`.
+
+## SSH directly into a toolbox
+
+Advanced users can SSH directly into a toolbox by setting up an `/etc/passwd` entry:
+
+```shell
+useradd bob -m -p '*' -s /usr/bin/toolbox -U -G sudo,docker,rkt
+```
+
+To test, SSH as bob:
+
+```shell
+ssh bob@hostname.example.com
+Flatcar Container Linux by Kinvolk alpha (2671.0.0)
+Downloading sha256:ee7e8933710 [=============================] 63.4 MB / 63.4 MB
+Spawning container bob-fedora-latest on /var/lib/toolbox/bob-fedora-latest.
+Press ^] three times within 1s to kill container.
+[root@srv-3qy0p ~]# dnf -y install emacs-nox
+[root@srv-3qy0p ~]# emacs /media/root/etc/systemd/system/newapp.service
+```
diff --git a/content/docs/latest/setup/debug/manual-rollbacks.md b/content/docs/latest/setup/debug/manual-rollbacks.md
new file mode 100644
index 00000000..5754b9b7
--- /dev/null
+++ b/content/docs/latest/setup/debug/manual-rollbacks.md
@@ -0,0 +1,218 @@
+---
+title: Performing manual Flatcar Container Linux rollbacks
+linktitle: Manual version rollbacks
+description: How to manually rollback to a previous Flatcar version.
+weight: 20
+aliases:
+ - ../../os/manual-rollbacks
+ - ../../clusters/debug/manual-rollbacks
+---
+
+In the event of an upgrade failure, Flatcar Container Linux will automatically boot with the version on the rollback partition. Immediately after an upgrade reboot, the active version of Flatcar Container Linux can be rolled back to the version installed on the rollback partition, or downgraded to the current version of any lower release channel. There is no method to downgrade to an arbitrary version number.
+
+This section describes the automated upgrade process, performing a manual rollback, and forcing a channel downgrade.
+
+**Note:** Neither performing a manual rollback nor forcing a channel downgrade are recommended.
+
+## Automated rollbacks
+
+The rollback to the previously installed version is done by GRUB and happens automatically if `update-engine` had no chance to mark the new version as successful.
+This marking happens when the new version boots and keeps running for around two minutes, at which point `update-engine` marks the version as successful (how this works in detail is explained below).
+
+To extend the automatic rollback logic to cover your important systemd services, you can make them a requirement of `update-engine.service`.
+
+Note that `update-engine` will still try to update, which can cause an update/rollback loop with disruptions due to the reboots.
+You can disable automatic updates by setting `SERVER=disabled` in `/etc/flatcar/update.conf`.
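+
+If you provision machines declaratively, the same setting can be written at first boot. A sketch as a Butane config (an assumption that you want updates disabled from the start; `overwrite: true` replaces any existing file):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/flatcar/update.conf
+      overwrite: true
+      mode: 0644
+      contents:
+        inline: |
+          SERVER=disabled
+```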
+
+## Rollback with `flatcar-update`
+
+While you can rollback to the previously installed version manually with the rest of this guide, you can also install any version to the inactive partition with the `flatcar-update` tool.
+To rollback to a known-good version, run it as follows:
+
+```shell
+$ sudo flatcar-update --to-version 2905.2.6 --disable-afterwards
+```
+
+The `--disable-afterwards` switch writes `SERVER=disabled` to `/etc/flatcar/update.conf` which disables updates.
+This ensures that you will stay on the version you specified.
+
+## How updates work
+
+The system's GPT tables are used to encode which partition is currently active and which is passive. This can be seen using the `cgpt` command.
+
+```shell
+$ cgpt show /dev/sda
+ start size part contents
+ 0 1 Hybrid MBR
+ 1 1 Pri GPT header
+ 2 32 Pri GPT table
+ 4096 262144 1 Label: "EFI-SYSTEM"
+ Type: EFI System Partition
+ UUID: 596FF08E-5617-4497-B10B-27A23F658B73
+ Attr: Legacy BIOS Bootable
+ 266240 4096 2 Label: "BIOS-BOOT"
+ Type: BIOS Boot Partition
+ UUID: EACCC3D5-E7E9-461D-A6E2-1DCDAE4671EC
+ 270336 2097152 3 Label: "USR-A"
+ Type: Alias for flatcar-rootfs
+ UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
+ Attr: priority=2 tries=0 successful=1
+ 2367488 2097152 4 Label: "USR-B"
+ Type: Alias for flatcar-rootfs
+ UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
+ Attr: priority=1 tries=0 successful=0
+ 4464640 262144 6 Label: "OEM"
+ Type: Alias for linux-data
+ UUID: 726E33FA-DFE9-45B2-B215-FB35CD9C2388
+ 4726784 131072 7 Label: "OEM-CONFIG"
+ Type: Flatcar Container Linux reserved
+ UUID: 8F39CE8B-1FB3-4E7E-A784-0C53C8F40442
+ 4857856 37085151 9 Label: "ROOT"
+ Type: Flatcar Container Linux auto-resize
+ UUID: D9A972BB-8084-4AB5-BA55-F8A3AFFAD70D
+ 41943007 32 Sec GPT table
+ 41943039 1 Sec GPT header
+```
+
+Looking specifically at "USR-A" and "USR-B", we see that "USR-A" is the active USR partition (this is what's actually mounted at /usr; you can verify this with `rootdev -s /usr`). Its priority is higher than that of "USR-B". When the system boots, GRUB (the bootloader) looks at the priorities, tries, and successful flags to determine which partition to use.
+
+```shell
+ 270336 2097152 3 Label: "USR-A"
+ Type: Alias for flatcar-rootfs
+ UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
+ Attr: priority=2 tries=0 successful=1
+ 2367488 2097152 4 Label: "USR-B"
+ Type: Alias for flatcar-rootfs
+ UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
+ Attr: priority=1 tries=0 successful=0
+```
+
+You'll notice that on this machine, "USR-B" hasn't actually successfully booted. Not to worry! This is a fresh machine that hasn't been through an update cycle yet. When the machine downloads an update, the partition table is updated to allow the newer image to boot.
+
+```shell
+ 270336 2097152 3 Label: "USR-A"
+ Type: Alias for flatcar-rootfs
+ UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
+ Attr: priority=1 tries=0 successful=1
+ 2367488 2097152 4 Label: "USR-B"
+ Type: Alias for flatcar-rootfs
+ UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
+ Attr: priority=2 tries=1 successful=0
+```
+
+In this case, we see that "USR-B" now has a higher priority and it has one try to successfully boot. Once the machine reboots, the partition table will again be updated.
+
+```shell
+ 270336 2097152 3 Label: "USR-A"
+ Type: Alias for flatcar-rootfs
+ UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
+ Attr: priority=1 tries=0 successful=1
+ 2367488 2097152 4 Label: "USR-B"
+ Type: Alias for flatcar-rootfs
+ UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
+ Attr: priority=2 tries=0 successful=0
+```
+
+Now we see that the number of tries for "USR-B" has been decremented to zero. The successful flag still hasn't been updated though. Once update-engine has had a chance to run, it marks the boot as being successful.
+
+```shell
+ 270336 2097152 3 Label: "USR-A"
+ Type: Alias for flatcar-rootfs
+ UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
+ Attr: priority=1 tries=0 successful=1
+ 2367488 2097152 4 Label: "USR-B"
+ Type: Alias for flatcar-rootfs
+ UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
+ Attr: priority=2 tries=0 successful=1
+```
+
+**Note:** You may also see `Alias for coreos-rootfs` shown for the `/usr` partition instead of `flatcar-rootfs`. You can refer to the partition type by either name, or by the more appropriate `flatcar-usr` name, which we will use from now on.
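+
+When you want to monitor these flags, they can be extracted from `cgpt show` output with a short script. A sketch (the function name and the `awk` parsing are assumptions based on the output format above):
+
+```shell
+# Print "label priority tries successful" for each USR partition found in
+# `cgpt show` output piped on stdin.
+usr_attrs() {
+  awk '
+    /Label: "USR-/ { label = $NF }
+    /Attr:/ && label != "" { print label, $2, $3, $4; label = "" }
+  '
+}
+
+# Example: cgpt show /dev/sda | usr_attrs
+```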
+
+## Performing a manual rollback
+
+So, now that we understand what happens when the machine updates, we can tweak the process so that it boots an older image (assuming it's still intact on the passive partition). The first command we'll use is `cgpt find -t flatcar-usr`. This will give us a list of all of the USR partitions available on the disk.
+
+```shell
+$ cgpt find -t flatcar-usr
+/dev/sda3
+/dev/sda4
+```
+
+To figure out which partition is currently active, we can use `rootdev`.
+
+```shell
+$ rootdev -s /usr
+/dev/sda4
+```
+
+So now we know that `/dev/sda3` is the passive partition on our system. We can compose the previous two commands to dynamically figure out the passive partition.
+
+```shell
+$ cgpt find -t flatcar-usr | grep --invert-match "$(rootdev -s /usr)"
+/dev/sda3
+```
+
+In order to rollback, we need to mark that partition as active using `cgpt prioritize`.
+
+```shell
+cgpt prioritize /dev/sda3
+```
+
+If we take another look at the GPT tables, we'll see that the priorities have been updated.
+
+```shell
+ 270336 2097152 3 Label: "USR-A"
+ Type: Alias for flatcar-rootfs
+ UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
+ Attr: priority=2 tries=0 successful=1
+ 2367488 2097152 4 Label: "USR-B"
+ Type: Alias for flatcar-rootfs
+ UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
+ Attr: priority=1 tries=0 successful=1
+```
+
+Composing the previous two commands produces the following command to set the currently passive partition to be active on the next boot:
+
+```shell
+cgpt prioritize "$(cgpt find -t flatcar-usr | grep --invert-match "$(rootdev -s /usr)")"
+```
+
+In the above scenario, _tries_ can stay 0 because the partition was marked as _successful_.
+If the partition was not successfully booted, we also need to set the available _tries_ to 1 again:
+
+```shell
+cgpt add -T 1 /dev/sda3
+```
+
+## Forcing a Channel Downgrade
+
+The procedure above restores the last known good Flatcar Container Linux version from immediately before an upgrade reboot. The system remains on the same [Flatcar Container Linux channel][relchans] after rebooting with the previous USR partition. It is also possible, though not recommended, to switch a Flatcar Container Linux installation to an older release channel, for example to make a system running an Alpha release downgrade to the Stable channel. Root privileges are required for this procedure, noted by `sudo` in the commands below.
+
+First, edit `/etc/flatcar/update.conf` to set `GROUP` to the name of the target channel, one of `stable` or `beta`:
+
+```ini
+GROUP=stable
+```
+
+Next, clear the current version number from the `release` file so that the target channel will be certain to have a higher version number, triggering the "upgrade," in this case a downgrade to the lower channel. Since `release` is on a read-only file system, it is convenient to temporarily override it with a bind mount. To do this, copy the original to a writable location, then bind the copy over the system `release` file:
+
+```shell
+cp /usr/share/flatcar/release /tmp
+sudo mount -o bind /tmp/release /usr/share/flatcar/release
+```
+
+The file is now writable, but the bind mount will not survive the reboot, so the default read-only system `release` file will be restored after this procedure is complete. Edit `/usr/share/flatcar/release` to replace the value of `FLATCAR_RELEASE_VERSION` with `0.0.0`:
+
+```ini
+FLATCAR_RELEASE_VERSION=0.0.0
+```
+
+Restart the update service so that it rescans the edited configuration, then initiate an update. The system will reboot into the selected lower channel after downloading the release:
+
+```shell
+sudo systemctl restart update-engine
+update_engine_client -update
+```
+
+[relchans]: ../releases/switching-channels
diff --git a/content/docs/latest/setup/debug/reading-the-system-log.md b/content/docs/latest/setup/debug/reading-the-system-log.md
new file mode 100644
index 00000000..5220d6e0
--- /dev/null
+++ b/content/docs/latest/setup/debug/reading-the-system-log.md
@@ -0,0 +1,138 @@
+---
+title: Reading the system log
+description: How to use journalctl to understand what's going on.
+weight: 15
+aliases:
+ - ../../os/reading-the-system-log
+---
+
+`journalctl` is your interface into a single machine's journal/logging. All service files insert data into the systemd journal. There are a few helpful commands to read the journal:
+
+## Read the entire journal
+
+```shell
+$ journalctl
+
+-- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:28:45 UTC. --
+Dec 22 00:10:21 localhost systemd-journal[33]: Runtime journal is using 184.0K (max 49.9M, leaving 74.8M of free 499.0M, current limit 49.9M).
+Dec 22 00:10:21 localhost systemd-journal[33]: Runtime journal is using 188.0K (max 49.9M, leaving 74.8M of free 499.0M, current limit 49.9M).
+Dec 22 00:10:21 localhost kernel: Initializing cgroup subsys cpuset
+Dec 22 00:10:21 localhost kernel: Initializing cgroup subsys cpu
+Dec 22 00:10:21 localhost kernel: Initializing cgroup subsys cpuacct
+Dec 22 00:10:21 localhost kernel: Linux version 3.11.7+ (buildbot@10.10.10.10) (gcc version 4.6.3 (Gentoo Hardened 4.6.3 p1.13, pie-0.5.2)
+...
+1000s more lines
+```
+
+## Read entries for a specific service
+
+Read entries generated by a specific unit:
+
+```shell
+$ journalctl -u apache.service
+
+-- Logs begin at Fri 2013-12-13 23:43:32 UTC, end at Sun 2013-12-22 12:32:52 UTC. --
+Dec 22 12:32:39 localhost systemd[1]: Starting Apache Service...
+Dec 22 12:32:39 localhost systemd[1]: Started Apache Service.
+Dec 22 12:32:39 localhost docker[9772]: /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
+Dec 22 12:32:39 localhost docker[9772]: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6 for ServerName
+```
+
+## Read the user journal from the current user
+
+A user other than the default `core` user might need to be added to the `systemd-journal` group before it can read the user journal. This can be done with the following Butane config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: flatcar
+ groups:
+ - systemd-journal
+```
+
+Then, after logging in as the `flatcar` user, run `journalctl --user`.
+
+## Read entries since boot
+
+Reading just the entries since the last boot is an easy way to troubleshoot services that are failing to start properly:
+
+```shell
+journalctl --boot
+```
+
+## Tail the journal
+
+You can tail the entire journal or just a specific service:
+
+```shell
+journalctl -f
+```
+
+```shell
+journalctl -u apache.service -f
+```
+
+## Read entries with line wrapping
+
+By default `journalctl` passes the `FRSXMK` command line options to [`less`](http://linux.die.net/man/1/less). You can override these options by setting a custom [`SYSTEMD_LESS`](http://www.freedesktop.org/software/systemd/man/journalctl.html#%24SYSTEMD_LESS) environment variable with the `S` option omitted, which enables line wrapping:
+
+```shell
+SYSTEMD_LESS=FRXMK journalctl
+```
+
+To read logs without the pager:
+
+```shell
+journalctl --no-pager
+```
+
+## Debugging journald
+
+If you're facing problems with journald, you can enable debug mode by following the instructions below.
+
+### Enable debugging manually
+
+```shell
+mkdir -p /etc/systemd/system/systemd-journald.service.d/
+```
+
+Create a [Drop-In][drop-ins] `/etc/systemd/system/systemd-journald.service.d/10-debug.conf` with the following content:
+
+```ini
+[Service]
+Environment=SYSTEMD_LOG_LEVEL=debug
+```
+
+Then restart the `systemd-journald` service:
+
+```shell
+systemctl daemon-reload
+systemctl restart systemd-journald
+dmesg | grep systemd-journald
+```
+
+### Enable debugging via a Butane Config
+
+Define a [Drop-In][drop-ins] in a [Butane Config][butane-configs]:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: systemd-journald.service
+ dropins:
+ - name: 10-debug.conf
+ contents: |
+ [Service]
+ Environment=SYSTEMD_LOG_LEVEL=debug
+```
+
+[drop-ins]: ../systemd/drop-in-units
+[butane-configs]: ../../provisioning/config-transpiler
+
+## More information
+
+ * Getting Started with systemd
+ * Network Configuration with networkd
diff --git a/content/docs/latest/setup/releases/_index.md b/content/docs/latest/setup/releases/_index.md
new file mode 100644
index 00000000..1eafb50e
--- /dev/null
+++ b/content/docs/latest/setup/releases/_index.md
@@ -0,0 +1,10 @@
+---
+title: Managing Releases
+description: >
+  Guides to help you select which Flatcar release to run on your instances,
+  how to set the update configuration, and how to verify manually downloaded
+  images.
+weight: 20
+aliases:
+ - ../../clusters/management
+---
diff --git a/content/docs/latest/setup/releases/switching-channels.md b/content/docs/latest/setup/releases/switching-channels.md
new file mode 100644
index 00000000..47a82982
--- /dev/null
+++ b/content/docs/latest/setup/releases/switching-channels.md
@@ -0,0 +1,146 @@
+---
+title: Switching release channels
+description: How to switch to a different release channel.
+weight: 10
+aliases:
+ - ../../os/switching-channels
+ - ../../clusters/management/switching-channels
+---
+
+Flatcar Container Linux is designed to be updated automatically with different schedules per channel. You can [disable this feature](update-strategies), although we don't recommend it.
+Read the [release notes](https://flatcar-linux.org/releases) for specific features, bug fixes, and changes.
+A new major release always starts in the Alpha channel - which is for developers - where it passes multiple feature and bug fix iterations.
+Roughly every second major Alpha release is promoted to the Beta channel; the promotion is based on stability and on feature completeness.
+The Beta channel is for user consumption, so operators can validate compatibility with user workloads.
+In Beta, the release passes additional iterations and eventually fully stabilises, receiving bug fixes addressing issues with user workloads.
+Roughly every second major Beta release is promoted to Stable.
+Thus, the Stable channel gets no brand new major releases but instead gets the bug fix release of a new major release. It then continues to get bug fix releases.
+Any Stable major version remains supported until a new major Stable version is released.
+We generally recommend that operators follow the Stable channel, with a few nodes on Beta for workload validation.
+Beta is generally considered ready for production.
+However, in edge cases new releases may show issues with certain user workloads.
+The Beta channel is an opportunity to validate early and to give feedback, so potential issues are fixed before they hit Stable.
+For low-maintenance scenarios there is the LTS channel which only gets bug fix releases.
+New major releases come out around once per year, marking a new LTS stream, and there is an overlap where the old stream still gets critical security updates.
+
+![Update Timeline](../../img/update-timeline.png)
+
+By default, Flatcar uses the public update server `public.update.flatcar-linux.net`.
+It promotes the new releases for each channel at the same time they are published.
+If you need more control over the update rollout, have a look at the possible [reboot strategies and manual update methods](update-strategies).
+The other alternative is running your own update server which allows you to control the update rollout over your fleet and even divide it into groups that have different rollout policies and release versions.
+The [Nebraska][nebraska] open source project implements the update server and also powers our public instance.
+More on this below and on the Nebraska [docs site][nebraska-docs].
+
+## Customizing channel configuration
+
+An installed image will by default follow the channel it was published in.
+The cloud vendor images (published per channel, e.g., Alpha, Beta, Stable) and the installer option (`flatcar-install -C <channel>`) are the recommended ways of selecting the latest release of a channel.
+The update client `update-engine` sources its configuration from `/usr/share/flatcar/update.conf` (baked into the image) and `/etc/flatcar/update.conf` (for user overwrites).
+The former file contains the default hardcoded configuration from the running OS version. Its values cannot be edited, but they can be overridden by the ones in the latter file.
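+The override behavior can be sketched by sourcing both files in order, with later assignments winning; the file contents below are illustrative:
+
+```shell
+usr_conf=$(mktemp)  # stands in for /usr/share/flatcar/update.conf
+etc_conf=$(mktemp)  # stands in for /etc/flatcar/update.conf
+printf 'GROUP=stable\nSERVER=https://public.update.flatcar-linux.net/v1/update/\n' > "$usr_conf"
+printf 'GROUP=beta\n' > "$etc_conf"
+# Source the baked-in defaults first, then the user overrides
+. "$usr_conf"
+. "$etc_conf"
+echo "$GROUP"   # beta: the /etc value overrides the /usr default
+rm "$usr_conf" "$etc_conf"
+```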
+
+To switch a machine to a different channel, specify the new channel group in `/etc/flatcar/update.conf`:
+
+```ini
+GROUP=beta
+```
+
+The machine should check for an update within an hour.
+
+The public Nebraska update service does not offer downgrades.
+If you're switching from a channel with a higher Flatcar Container Linux version than the new channel, your machine won't be updated again until the new channel contains a higher version number.
+To force an update, use the `flatcar-update` tool (see below) or overwrite your current version.
+
+If you don't use `flatcar-update`, overwrite your version with these steps to force a downgrade:
+
+```shell
+sudo rm -f /tmp/release
+sudo umount /usr/share/coreos/release || true
+cp /usr/share/coreos/release /tmp/release
+sed -E -i "s/(COREOS_RELEASE_VERSION=)(.*)/\10.0.0/" /tmp/release
+sudo mount --bind /tmp/release /usr/share/coreos/release
+```
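+The `sed` substitution above can be sanity-checked on a scratch copy with sample release contents (values are illustrative):
+
+```shell
+tmp=$(mktemp)
+printf 'COREOS_RELEASE_VERSION=3602.2.3\nCOREOS_RELEASE_BOARD=amd64-usr\n' > "$tmp"
+# Same substitution as above: force the version down to 0.0.0
+sed -E -i "s/(COREOS_RELEASE_VERSION=)(.*)/\10.0.0/" "$tmp"
+line=$(grep COREOS_RELEASE_VERSION "$tmp")
+echo "$line"
+# COREOS_RELEASE_VERSION=0.0.0
+rm "$tmp"
+```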
+
+**Note:** After the update is downloaded and the system is ready to reboot, remove the `GROUP` entry again from `/etc/flatcar/update.conf` because the new update has it as default and there is no need to hardcode it there.
+
+### Freezing an LTS stream
+
+A new LTS major version / stream is released roughly once per year.
+Each LTS major release stream has an 18 month support cycle, so there's a 6 month overlap between new major releases.
+
+The public update channel `GROUP=lts` points to the current LTS release stream.
+This means that it always provides the latest LTS release and, therefore, by default a major version jump happens when, e.g., the current LTS stream is switched over from `lts-2021` to `lts-2022`.
+Since this can be disruptive depending on the customizations and deployed software, the recommendation is to freeze the LTS stream on deployment and manually switch to a newer LTS stream at one's own pace each year.
+
+The entry in `/etc/flatcar/update.conf` to opt out of major version updates can be added via Ignition or manually (here for only receiving updates for the LTS 2022 stream, i.e., release major version 3033):
+
+```ini
+GROUP=lts-2022
+```
+
+An alternative is to manage the update rollout through your own Nebraska update server, where you manage your own `lts` group (see below).
+
+## Jump to another channel with `flatcar-update`
+
+With the `flatcar-update` tool you can jump to any release, including one from another channel, effectively switching the channel. Before doing so, check that you didn't hardcode a particular channel as `GROUP` in `/etc/flatcar/update.conf`.
+
+```shell
+$ # In case another channel is set as GROUP, first remove it so that in the future the channel from the new release gets used:
+$ sudo sed -i "/GROUP=.*/d" /etc/flatcar/update.conf
+$ # Set the channel you want to jump to:
+$ CHANNEL=beta
+$ VER=$(curl -fsSL "https://$CHANNEL.release.flatcar-linux.net/amd64-usr/current/version.txt" | grep FLATCAR_VERSION= | cut -d = -f 2)
+$ sudo flatcar-update --to-version "$VER"
+```
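+The `grep`/`cut` pipeline that extracts `FLATCAR_VERSION` can be tried on a sample `version.txt` (the contents here are illustrative):
+
+```shell
+sample='FLATCAR_BUILD=3602
+FLATCAR_BRANCH=2
+FLATCAR_PATCH=3
+FLATCAR_VERSION=3602.2.3'
+# Keep only the FLATCAR_VERSION line and take everything after the '='
+VER=$(echo "$sample" | grep FLATCAR_VERSION= | cut -d = -f 2)
+echo "$VER"
+# 3602.2.3
+```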
+
+## Use a personal update server
+
+When you set up your own [Nebraska][nebraska] update server, you can point your Flatcar machines at it to fetch their updates.
+Nebraska's web interface allows you to create custom groups that specify update rollout policies, and custom channels that specify the Flatcar version.
+Multiple groups can point to the same channel. The Nebraska web interface also gives an overview about the machines and their update status.
+
+It is recommended to start Nebraska with the `-enable-syncer` flag which keeps the Stable, Beta, Alpha, and LTS channels in sync with the public server.
+The default sync interval is one hour but may be shortened (Nebraska option `-sync-interval`). You need to create the `lts-2022` and similar channels if they don't exist on your instance.
+To specify a particular Flatcar version you want to deploy, you should not modify the `stable` *channel* because this gets synced with the public server and your changes are lost.
+You should rather create a new channel and let the `stable` *group* point to it.
+When using your own Nebraska update server, the `lts` group is not automatically switched over to point to a new `lts-YEAR` channel when one comes out.
+To migrate your machines to the new LTS major release, first create the `lts-YEAR` channel on your instance if it doesn't exist, wait for the syncer to pick up the latest version for the channel, and then let the `lts` group point to the new channel.
+This needs to be done manually in Nebraska, but it has the advantage that the `lts` group, which is the default for LTS installations, can be kept, and no changes on the machines themselves are necessary.
+
+For machines with restricted Internet access, the Nebraska `-host-flatcar-packages` option lets Nebraska store the update payloads locally when syncing from the public server; the machines will then fetch them from your Nebraska instance's URL.
+
+Here is how to configure a machine through `/etc/flatcar/update.conf` to get updates from your personal Nebraska server:
+
+```ini
+SERVER=http://your.nebraska.host:port/v1/update/
+GROUP=myproduction
+```
+
+More specifics about Nebraska can be found on its [docs site][nebraska-docs].
+
+
+## Debugging
+
+The live status of update checking can be queried via:
+
+```shell
+update_engine_client --status
+```
+
+The update engine logs all update attempts, which can be inspected in the system journal:
+
+```shell
+journalctl -f -u update-engine
+```
+
+For reference, the OS version and channel for a running system can be determined via:
+
+```shell
+cat /usr/share/flatcar/os-release
+cat /usr/share/flatcar/update.conf
+```
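+Both files use shell-style `KEY=VALUE` syntax, so individual fields can also be read by sourcing them; a sketch on a sample file with illustrative values:
+
+```shell
+osrel=$(mktemp)
+printf 'ID=flatcar\nVERSION=3602.2.3\n' > "$osrel"
+. "$osrel"   # os-release style files are valid shell assignments
+echo "$ID $VERSION"
+# flatcar 3602.2.3
+rm "$osrel"
+```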
+
+Note: while a manual channel switch is in progress, `/usr/share/flatcar/update.conf` shows the channel for the current OS while `/etc/flatcar/update.conf` shows the one for the next update.
+
+[nebraska]: https://github.com/kinvolk/nebraska/
+[nebraska-docs]: https://kinvolk.io/docs/nebraska/latest
diff --git a/content/docs/latest/setup/releases/update-conf.md b/content/docs/latest/setup/releases/update-conf.md
new file mode 100644
index 00000000..f7b1794d
--- /dev/null
+++ b/content/docs/latest/setup/releases/update-conf.md
@@ -0,0 +1,67 @@
+---
+content_type: reference
+title: Flatcar Container Linux update.conf specification
+linktitle: update.conf
+description: Fields and Location of the Flatcar update configuration file.
+weight: 100
+aliases:
+ - ../../os/update-conf
+ - ../../clusters/management/update-conf
+---
+
+Flatcar Container Linux uses [`update_engine`][update_engine] to check and fetch new updates from the [Nebraska Update Service](https://github.com/kinvolk/nebraska).
+
+## Location
+
+The client-side configuration of these updates is stored in `/etc/flatcar/update.conf`
+(there is a legacy symlink `/etc/coreos -> /etc/flatcar` for compatibility with third-party integrations).
+This file is on the user-writable partition by default.
+
+The following read-only files are checked next for default values:
+
+* `/usr/share/flatcar/update.conf`
+ * Generated at build time of the image/payload build
+ * will typically contain:
+ * `SERVER=`
+ * `GROUP=`
+* `/usr/share/flatcar/release`
+ * Generated at build time of the image/payload build
+ * will typically contain:
+ * `FLATCAR_RELEASE_VERSION=`
+ * `FLATCAR_RELEASE_BOARD=`
+ * `FLATCAR_RELEASE_APPID=`
+
+## Fields
+
+Default installs of Flatcar will likely not need custom settings, and an empty or non-existing `/etc/flatcar/update.conf` file is sufficient.
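+For illustration, a populated `/etc/flatcar/update.conf` combining a few of the fields described below might look like this (all values are examples):
+
+```ini
+GROUP=stable
+SERVER=https://public.update.flatcar-linux.net/v1/update/
+MACHINE_ALIAS=web-01
+```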
+
+* `GROUP`
+ * The channel/group this host will pull updates from
+  * Public channels are `stable`, `beta`, and `alpha`; since this value is also part of the OS image under `/usr/share/flatcar/update.conf`, you should only overwrite it if needed
+ * Nebraska supports group aliases that can be used instead of UUIDs
+* `SERVER`
+ * The update server to reach for updates
+  * The default community server is `https://public.update.flatcar-linux.net/v1/update/`
+  * An invalid URL like `disabled` effectively disables downloading of updates while still allowing update-engine to mark a booted partition as successful; together with the `flatcar-update` command you can use this instead of masking `update-engine.service`
+* `FLATCAR_RELEASE_VERSION`
+ * The current version of this machine
+* `FLATCAR_RELEASE_BOARD`
+ * The board build is determined by the architecture of the machine
+* `FLATCAR_RELEASE_APPID`
+ * The Flatcar specific application ID
+ * For Flatcar this is: `{e96281a6-d1af-4bde-9a0a-97b76e56dc57}`
+* `PCR_POLICY_SERVER`
+ * Server to receive the `POST`'ed TPM PCR Policy
+* `DOWNLOAD_USER`
+ * Authentication user for fetching the update payload
+ * As the update server can redirect to a payload download that may require its own authentication
+* `DOWNLOAD_PASSWORD`
+ * Authentication password for fetching the update payload
+ * As the update server can redirect to a payload download that may require its own authentication
+* `MACHINE_ALIAS`
+ * Optional human-friendly name for the machine in addition to the machine ID from `/etc/machine-id`, to be displayed in the update server UI, should be unique but this is not enforced, use quotes if it contains whitespace
+ * Set this dynamically by running, e.g., `sudo sed -i "/MACHINE_ALIAS=.*/d" /etc/flatcar/update.conf ; echo "MACHINE_ALIAS=$(hostname)" | sudo tee -a /etc/flatcar/update.conf` for the output of the `hostname` command (as with the other variables, restarting `update-engine.service` is not needed)
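+The delete-then-append pattern from the `MACHINE_ALIAS` example keeps the file to a single alias line; here is a sketch on a scratch copy (file contents and alias value are illustrative):
+
+```shell
+conf=$(mktemp)
+printf 'GROUP=stable\nMACHINE_ALIAS=old-name\n' > "$conf"
+sed -i "/MACHINE_ALIAS=.*/d" "$conf"     # drop any existing alias line
+echo "MACHINE_ALIAS=web-01" >> "$conf"   # append the new one, e.g. from $(hostname)
+count=$(grep -c '^MACHINE_ALIAS=' "$conf")
+tail -n 1 "$conf"
+# MACHINE_ALIAS=web-01
+rm "$conf"
+```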
+
+_(For future-proofing, you can list all fields recognized by the client by running `git grep GetConfValue\(\"` in the [`update_engine`][update_engine] repo.)_
+
+[update_engine]: https://github.com/flatcar/update_engine
diff --git a/content/docs/latest/setup/releases/update-strategies.md b/content/docs/latest/setup/releases/update-strategies.md
new file mode 100644
index 00000000..f4466753
--- /dev/null
+++ b/content/docs/latest/setup/releases/update-strategies.md
@@ -0,0 +1,347 @@
+---
+title: Update and reboot strategies
+description: How to configure when your Flatcar instances should reboot.
+weight: 30
+aliases:
+ - ../../os/update-strategies
+ - ../../clusters/creation/update-strategies
+---
+
+The overarching goal of Flatcar Container Linux is to secure the Internet's backend infrastructure. We believe that automatically updating the operating system is one of the best tools to achieve this goal.
+
+We realize that each Flatcar Container Linux cluster has a unique tolerance for risk and the operational needs of your applications are complex. In order to meet everyone's needs, there are different update/reboot strategies that we have developed.
+
+This document is about the update client and how it consumes updates when they become available.
+The public update server makes new releases available as soon as they are published.
+To control this part of the update rollout, look at the different [public update channels and how you can run your own update server](../switching-channels/).
+
+It's important to note that updates are always downloaded to the passive partition when they become available (see further below for disabling automatic updates). A reboot is the last step of the update, where the active and passive partitions are swapped ([rollback instructions][rollback]).
+
+The reboot is done by the reboot manager, by default this is the `locksmithd.service` included on the image.
+For Kubernetes the recommended reboot manager is [FLUO](https://github.com/flatcar/flatcar-linux-update-operator/) which replaces locksmithd because it knows how to gracefully reboot a Kubernetes node.
+The [kured](https://github.com/weaveworks/kured) reboot manager will be supported as well starting from Flatcar versions with a release number greater than `3067.0.0`.
+
+The `update-engine.service` responsible for downloading and applying the updates can be in different states which you can query with `update_engine_client -status`:
+
+- `UPDATE_STATUS_IDLE` (did not find an update)
+- `UPDATE_STATUS_CHECKING_FOR_UPDATE`
+- `UPDATE_STATUS_UPDATE_AVAILABLE` (can be a result of `update_engine_client -check_for_update`)
+- `UPDATE_STATUS_DOWNLOADING`
+- `UPDATE_STATUS_VERIFYING`
+- `UPDATE_STATUS_FINALIZING`
+- `UPDATE_STATUS_UPDATED_NEED_REBOOT` (update applied to inactive partition, this is where the reboot manager comes in)
+- `UPDATE_STATUS_REPORTING_ERROR_EVENT` (error encountered, use `journalctl -u update-engine -e` to get more info)
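+Since the tool prints `KEY=VALUE` lines, the current state can be extracted with the same `grep`/`cut` pattern used elsewhere in these docs; the sample output below is illustrative and the field names are an assumption based on typical `update_engine_client -status` output:
+
+```shell
+sample='LAST_CHECKED_TIME=1700000000
+PROGRESS=0.0
+CURRENT_OP=UPDATE_STATUS_UPDATED_NEED_REBOOT
+NEW_VERSION=0.0.0
+NEW_SIZE=0'
+state=$(echo "$sample" | grep '^CURRENT_OP=' | cut -d = -f 2)
+echo "$state"
+# UPDATE_STATUS_UPDATED_NEED_REBOOT
+```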
+
+## Locksmithd reboot strategies
+
+These locksmithd strategies control how a reboot occurs when update-engine indicates that one is needed:
+
+| Strategy | Description |
+|---------------|------------------------------------------------------------------------------|
+| `etcd-lock` | Reboot after first taking a distributed lock in etcd (reboot window applies) |
+| `reboot` | Reboot immediately after an update is applied (reboot window applies) |
+| `off` | Do not reboot after updates are applied |
+
+For both rebooting strategies you can configure a reboot window, so that reboots are only allowed to happen within it.
+
+The default behavior is `reboot` and results in a reboot with a delay of 5 minutes.
+
+## Reboot strategy options through Butane/Ignition
+
+The reboot strategy can be set with the following Butane Config section:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/flatcar/update.conf
+ overwrite: true
+ contents:
+ inline: |
+ REBOOT_STRATEGY=etcd-lock
+      mode: 0644
+```
+
+This gets transpiled to the following Ignition configuration, which writes the line `REBOOT_STRATEGY=etcd-lock` to `/etc/flatcar/update.conf`:
+
+```json
+{
+  "ignition": {
+    "version": "3.3.0"
+  },
+  "storage": {
+    "files": [
+      {
+        "overwrite": true,
+        "path": "/etc/flatcar/update.conf",
+        "contents": {
+          "compression": "",
+          "source": "data:,REBOOT_STRATEGY%3Detcd-lock%0A"
+        },
+        "mode": 420
+      }
+    ]
+  }
+}
+```
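+Note that the octal `mode` from the Butane YAML is serialized as a plain decimal integer in the Ignition JSON; the conversion can be checked with `printf`:
+
+```shell
+dec=$(printf '%d' 0644)  # printf interprets the leading 0 as octal
+echo "$dec"
+# 420
+```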
+
+### etcd-lock
+
+The `etcd-lock` strategy mandates that each machine acquire and hold a reboot lock before it is allowed to reboot. The main goal behind this strategy is to allow for an update to be applied to a cluster quickly, without losing the quorum membership in etcd or rapidly reducing capacity for the services running on the cluster. The reboot lock is held until the machine releases it after a successful update.
+
+The number of machines allowed to reboot simultaneously is configurable via a command line utility:
+
+```shell
+$ locksmithctl set-max 4
+Old: 1
+New: 4
+```
+
+This setting is stored in etcd so it won't have to be configured for subsequent machines.
+
+To view the number of available slots and find out which machines in the cluster are holding locks, run:
+
+```shell
+$ locksmithctl status
+Available: 0
+Max: 1
+
+MACHINE ID
+69d27b356a94476da859461d3a3bc6fd
+```
+
+If needed, you can manually clear a lock by providing the machine ID:
+
+```shell
+locksmithctl unlock 69d27b356a94476da859461d3a3bc6fd
+```
+
+### Reboot immediately
+
+The `reboot` strategy works exactly like it sounds: the machine is rebooted as soon as the update has been installed to the passive partition. If the applications running on your cluster are highly resilient, this strategy was made for you.
+
+### Off
+
+The `off` strategy is also straightforward. The update will be installed onto the passive partition and await a reboot command to complete the update. We don't recommend this strategy unless you reboot frequently as part of your normal operations workflow.
+
+Read below on how to _disable automatic updates_ if that is what you actually want, instead of keeping a half-applied update on disk that gets selected even on an accidental reboot. The `off` strategy also blocks the inactive partition with the earliest version that becomes available as an update, requiring the _double update workaround_ described at the end of this document.
+
+## Auto-updates with a maintenance window
+
+Locksmith supports maintenance windows in addition to the reboot strategies mentioned earlier. Maintenance windows define a window of time during which a reboot can occur. These operate in addition to reboot strategies, so if the machine has a maintenance window and requires a reboot lock, the machine will only reboot when it has the lock during that window.
+
+Windows are defined by a start time and a length. In this example, the window is defined to be every Thursday between 04:00 and 05:00:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/flatcar/update.conf
+ overwrite: true
+ contents:
+ inline: |
+ REBOOT_STRATEGY=reboot
+ LOCKSMITHD_REBOOT_WINDOW_START=Thu 04:00
+ LOCKSMITHD_REBOOT_WINDOW_LENGTH=1h
+      mode: 0644
+```
+
+This configures a Flatcar Container Linux machine to follow the `reboot` strategy: when an update is ready, it simply reboots instead of attempting to grab a lock in etcd. However, this machine has also been configured to only reboot between 04:00 and 05:00 on Thursdays, so if an update is applied outside of this window, the machine waits until the window to reboot.
+
+For more information about the supported syntax, refer to the [Locksmith documentation][reboot-windows].
+
+## Updating PXE/iPXE machines
+
+PXE/iPXE machines download a new copy of Flatcar Container Linux every time they are started and are thus dependent on the version of Flatcar Container Linux they are served. If you don't load new Flatcar Container Linux images into your PXE/iPXE server, your machines will never receive new features or security updates.
+
+An easy solution to this problem is to use iPXE and reference images [directly from the Flatcar Container Linux storage site][ipxe-boot-script]. The `alpha` URL is automatically pointed to the new version of Flatcar Container Linux as it is released.
+
+In case you never install to disk but only run the PXE image in memory, you would still need a manual reboot to switch to new versions. To address that, consider running the external tool [flatcar-pxe-update-engine](https://github.com/utilitywarehouse/flatcar-pxe-update-engine) for automatic reboots with locksmith as discussed in the sections above.
+
+## Disable Automatic Updates
+
+If for a short time frame you want to temporarily disable update reboots, run `sudo systemctl stop update-engine locksmithd`, and when done, `sudo systemctl start update-engine locksmithd`.
+
+If you want to permanently disable automatic updates, it's not recommended to mask the services because that makes it harder to apply updates manually.
+Instead, it's recommended to overwrite the `SERVER` variable in the update configuration with an invalid value.
+
+You can configure this with a Butane Config (needs to be [transpiled][transpiler] to Ignition JSON):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/flatcar/update.conf
+ overwrite: true
+ mode: 0644
+ contents:
+ inline: |
+ SERVER=disabled
+```
+
+To manually run updates, remove the file and run `update_engine_client -update`, or wait for the update to happen.
+After update-engine has applied the update to the passive partition, you can create the file again to disable automatic updates.
+
+The `flatcar-update` tool automatically removes the `SERVER=disabled` line to apply a manual update and restores it after applying the update (it also has an explicit `--disable-afterwards` switch to set `SERVER=disabled`):
+
+```shell
+$ # For example, update to the latest Stable release:
+$ VER=$(curl -fsSL https://stable.release.flatcar-linux.net/amd64-usr/current/version.txt | grep FLATCAR_VERSION= | cut -d = -f 2)
+$ sudo flatcar-update --to-version $VER
+```
+
+If you didn't have `SERVER=disabled` set, use the `--disable-afterwards` switch to write it to `/etc/flatcar/update.conf`.
+Disabling updates ensures that you will stay on the version you specified.
+
+After applying the update, wait for the reboot to happen or invoke it manually.
+
+As an alternative, you could mask the update-engine and locksmithd services as follows (but read the warning below):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: update-engine.service
+ mask: true
+ - name: locksmithd.service
+ mask: true
+```
+
+**Note:** As mentioned, masking the services is not recommended, but if you want to manually trigger an update after having masked `update-engine`,
+you'll need to unmask the service, start `update-engine` to trigger an update, and
+**keep the service unmasked** until the next reboot is completed and `update-engine` started
+and marked the updated partition as successful.
+Otherwise, the update will be considered unsuccessful and in all following reboots GRUB will use the
+old partition again because `update-engine` never marked the new partition to be successfully booted.
+
+To check that you can stop and mask `update-engine` after the reboot, run these commands to see that
+the partition was marked as successful. This will happen after the service ran for about 1 minute:
+
+```shell
+$ sudo cgpt show "$(rootdev -s /usr)" | grep successful=1
+ Attr: priority=1 tries=0 successful=1
+```
+
+## Airgapped updates
+
+Updating a machine without Internet access is done in two steps.
+First, you need to download the update payload on a non-airgapped machine, then you copy it to your airgapped machine and run the `flatcar-update` tool.
+If the `flatcar-update` tool is missing on your machine, [download](https://raw.githubusercontent.com/flatcar/init/flatcar-master/bin/flatcar-update) it first, too.
+
+On the non-airgapped machine (here for amd64, use `ARCH=arm64` for arm64):
+
+```shell
+ARCH=amd64
+VER=$(curl -fsSL https://stable.release.flatcar-linux.net/${ARCH}-usr/current/version.txt | grep FLATCAR_VERSION= | cut -d = -f 2)
+echo "$VER"
+# or if you know which version to update to, set it like VER=3033.2.1 (no channel info needed)
+wget "https://update.release.flatcar-linux.net/${ARCH}-usr/${VER}/flatcar_production_update.gz"
+```
+
+On the airgapped machine (here with the file `flatcar_production_update.gz` in the current folder):
+
+```shell
+VER=... # use the same value as above
+sudo ./flatcar-update --to-version "$VER" --to-payload flatcar_production_update.gz
+```
+
+Then reboot or wait for the reboot coordinator to do so.
+
+## Updating behind a proxy
+
+Public Internet access is required to contact the update server and download new versions of Flatcar Container Linux. If direct access is not available, the `update-engine` service may be configured to use an HTTP or SOCKS proxy using curl-compatible environment variables, such as `HTTPS_PROXY` or `ALL_PROXY`.
+See [curl's documentation](http://curl.haxx.se/docs/manpage.html#ALLPROXY) for details.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: update-engine.service
+ dropins:
+ - name: 50-proxy.conf
+ contents: |
+ [Service]
+ Environment=ALL_PROXY=http://proxy.example.com:3128
+```
+
+Proxy environment variables can also be set [system-wide][systemd-env-vars].
+
+## Manually triggering an update
+
+Each machine should check in about 10 minutes after boot and roughly every hour after that. If you'd like to see it sooner, you can force an update check, which skips any rate-limiting settings configured on the update server.
+
+```shell
+$ update_engine_client -check_for_update
+[0123/220706:INFO:update_engine_client.cc(245)] Initiating update check and install.
+```
+
+### Double update workaround
+
+If you have disabled automatic reboots and your host has already applied an update, your Flatcar host will not apply a _newer_ update until it has rebooted into the previously applied update
+(i.e., the host is in the `UPDATE_STATUS_UPDATED_NEED_REBOOT` state).
+To work around this intermediate reboot, you can run:
+
+```shell
+update_engine_client -reset_status
+update_engine_client -check_for_update
+```
+
+### Management of config files
+
+Since Alpha 3535.0.0, OS config files under `/etc` are updated through the overlay mount as long as they are not modified.
+On boot, any files in `/etc` that are identical to the defaults provided by the booted `/usr/share/flatcar/etc` (the lower layer of the overlay mount on `/etc`) are deleted, to ensure that future updates of `/usr/share/flatcar/etc` are propagated. To opt out, create `/etc/.no-dup-update`, either because you want to keep an unmodified config file as is, or because a future Flatcar version might ship the same file contents as yours, at which point your copy would be cleaned up and further Flatcar changes would be applied.
+
+To find out how your machine's configuration differs from the OS defaults, run:
+
+```sh
+sudo git diff --no-index /usr/share/flatcar/etc /etc
+```
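+`git diff --no-index` works on arbitrary paths outside any repository, which is why it can compare `/usr/share/flatcar/etc` against `/etc`; here is a sketch on two scratch directories with illustrative contents:
+
+```shell
+a=$(mktemp -d)
+b=$(mktemp -d)
+echo 'nameserver 8.8.8.8' > "$a/resolv.conf"
+echo 'nameserver 1.1.1.1' > "$b/resolv.conf"
+# git diff exits non-zero when the trees differ, so guard the exit code
+diffout=$(git diff --no-index "$a" "$b" || true)
+echo "$diffout" | grep -- '-nameserver 8.8.8.8'
+rm -r "$a" "$b"
+```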
+
+You can also see which files exist under the real `/etc` (hidden below the overlay) with the following command:
+
+```sh
+sudo unshare -m sh -c "umount /etc; ls -lahR /etc"
+```
+
+### Configure a post-install update hook
+
+Sometimes you may want to run a custom action after update-engine has written the new partition.
+You can create a `/usr/share/oem/bin/oem-postinst` script that gets passed two arguments:
+the first is the slot (`A` or `B`), the second is the temporary mount point where the new `/usr` partition contents can be accessed.
+Since the hook runs shortly before the new partition is prioritized, you should not reboot directly from it.
+Let the script exit with a non-zero return code only if you want to stop the update.
+The hook runs as a root user process under the `update-engine.service` unit.
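+A minimal hook sketch follows; the argument handling matches the description above, while the echoed message is purely illustrative (shown as a shell function for easy testing, on a real system this would be the body of `/usr/share/oem/bin/oem-postinst`):
+
+```shell
+oem_postinst() {
+  slot="$1"  # A or B
+  mnt="$2"   # temporary mount point of the new /usr contents
+  echo "new /usr for slot ${slot} staged at ${mnt}"
+  return 0   # a non-zero return code would stop the update
+}
+msg=$(oem_postinst A /tmp/new-usr)
+echo "$msg"
+# new /usr for slot A staged at /tmp/new-usr
+```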
+
+The following Butane example shows how a custom reboot hook for `kured` can be added to old Flatcar releases that don't support it yet (for release number greater than `3067.0.0` this is not needed).
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ filesystems:
+ - device: /dev/disk/by-label/OEM
+ format: btrfs
+ label: OEM
+ path: /oem
+ directories:
+ - path: /oem/bin
+ mode: 0755
+ files:
+ - path: /oem/bin/oem-postinst
+ mode: 0755
+ contents:
+ inline: |
+ #!/bin/sh
+ touch /run/reboot-required
+```
+
+[ipxe-boot-script]: ../../installing/bare-metal/booting-with-ipxe#setting-up-ipxe-boot-script
+[rollback]: ../debug/manual-rollbacks
+[reboot-windows]: https://github.com/flatcar/locksmith#reboot-windows
+[systemd-env-vars]: ../systemd/environment-variables/#system-wide-environment-variables
+[transpiler]: ../../provisioning/config-transpiler/
diff --git a/content/docs/latest/setup/releases/verify-images.md b/content/docs/latest/setup/releases/verify-images.md
new file mode 100644
index 00000000..80476459
--- /dev/null
+++ b/content/docs/latest/setup/releases/verify-images.md
@@ -0,0 +1,56 @@
+---
+title: Verify Flatcar Container Linux images with GPG
+linktitle: Verifying Images
+description: How to verify the authenticity of Flatcar Container Linux images, using GPG.
+weight: 40
+aliases:
+ - ../../os/verify-images
+ - ../../clusters/creation/verify-images
+---
+
+Kinvolk publishes new Flatcar Container Linux images for each release across a variety of platforms and hosting providers. Each channel has its own set of images ([stable], [beta], [alpha]) that are posted to our storage site. Along with each image, a signature is generated from the [Flatcar Container Linux Image Signing Key][signing-key] and posted.
+
+[signing-key]: https://www.flatcar.org/security/image-signing-key/
+[stable]: https://stable.release.flatcar-linux.net/amd64-usr/current/
+[beta]: https://beta.release.flatcar-linux.net/amd64-usr/current/
+[alpha]: https://alpha.release.flatcar-linux.net/amd64-usr/current/
+
+After downloading your image, you should verify it with the `gpg` tool. First, download the image signing key:
+
+```shell
+curl -L -O https://www.flatcar.org/security/image-signing-key/Flatcar_Image_Signing_Key.asc
+```
+
+Next, import the public key and verify that the key ID matches the one published on the website: [Flatcar Image Signing Key][signing-key]
+
+```shell
+$ gpg --import --keyid-format LONG Flatcar_Image_Signing_Key.asc
+gpg: key E25D9AED0593B34A: public key "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>" imported
+gpg: Total number processed: 1
+gpg: imported: 1
+```
+
+Optionally, if you have your own gpg key, mark the key as valid in the local trustdb:
+
+```shell
+gpg --lsign-key E25D9AED0593B34A
+```
+
+Now we're ready to download an image and its signature, which ends in `.sig`. We're using the QEMU image in this example:
+
+```shell
+curl -L -O https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+curl -L -O https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2.sig
+```
+
+Verify the image with the `gpg` tool:
+
+```shell
+$ gpg --verify flatcar_production_qemu_image.img.bz2.sig
+gpg: assuming signed data in 'flatcar_production_qemu_image.img.bz2'
+gpg: Signature made Tue Aug 31 19:47:19 2021 CEST
+gpg: using RSA key 858A560F97C9AEB22EC1C732961DDDD5250D4A42
+gpg: issuer "buildbot@flatcar-linux.org"
+gpg: Good signature from "Flatcar Buildbot (Official Builds) <buildbot@flatcar-linux.org>"
+```
+
+The `Good signature` message indicates that the file signature is valid. Now that we've verified that this Flatcar Container Linux image isn't corrupt, that it was authored by Kinvolk, and that it wasn't tampered with in transit, go launch some machines.
diff --git a/content/docs/latest/setup/security/_index.md b/content/docs/latest/setup/security/_index.md
new file mode 100644
index 00000000..12e2df06
--- /dev/null
+++ b/content/docs/latest/setup/security/_index.md
@@ -0,0 +1,11 @@
+---
+title: Additional Security Options
+linktitle: Security Options
+description: >
+ Flatcar Container Linux has a number of security measures enabled by
+ default, but there are always more options that can be enabled if desired.
+ These guides provide information on what additional options can be set.
+weight: 45
+aliases:
+ - ../clusters/securing
+---
diff --git a/content/docs/latest/setup/security/adding-certificate-authorities.md b/content/docs/latest/setup/security/adding-certificate-authorities.md
new file mode 100644
index 00000000..14411568
--- /dev/null
+++ b/content/docs/latest/setup/security/adding-certificate-authorities.md
@@ -0,0 +1,23 @@
+---
+title: Custom certificate authorities
+description: How to add and configure your own CAs.
+weight: 30
+aliases:
+ - ../../os/adding-certificate-authorities
+ - ../../clusters/securing/adding-certificate-authorities
+---
+
+Flatcar Container Linux supports custom Certificate Authorities (CAs) in addition to the default list of trusted CAs. Adding your own CA allows you to:
+
+- Use a corporate wildcard certificate
+- Use your own CA to communicate with an installation of CoreUpdate
+
+The setup process for any of these use-cases is the same:
+
+1. Copy the PEM-encoded certificate authority file (usually with a `.pem` file name extension) to `/etc/ssl/certs`
+
+2. Run the `update-ca-certificates` script to update the system bundle of Certificate Authorities. All programs running on the system will now trust the added CA.
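+
+For new machines, the CA file can instead be provisioned at first boot. A hedged sketch, with a hypothetical file name (replace the inline contents with your real PEM data):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/ssl/certs/example-corp-ca.pem
+      mode: 0644
+      contents:
+        inline: |
+          -----BEGIN CERTIFICATE-----
+          ...your PEM-encoded CA certificate...
+          -----END CERTIFICATE-----
+```
+
+Depending on the image version, you may still need to run `update-ca-certificates` once after provisioning for the bundle to pick the file up.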
+
+## More information
+ * [Generate Self-Signed Certificates](generate-self-signed-certificates)
+ * [etcd Security Model](https://etcd.io/docs/v3.4.0/op-guide/security/)
diff --git a/content/docs/latest/setup/security/audit.md b/content/docs/latest/setup/security/audit.md
new file mode 100644
index 00000000..9e2d3348
--- /dev/null
+++ b/content/docs/latest/setup/security/audit.md
@@ -0,0 +1,40 @@
+---
+title: Setting up the Linux Auditing System
+linktitle: Set up audit
+description: Setting up the Linux Auditing System.
+weight: 20
+---
+
+On Flatcar Container Linux `audit-rules.service` loads the audit rules to set up the logging filters for the kernel messages.
+The `auditd.service` daemon, which collects these logs, does not run by default.
+
+# Enabling the standard rules or custom rules
+
+There is a default ignore rule that suppresses the standard rules, which means that certain PAM audit messages are not shown.
+It is also important to remove this default ignore rule when setting up your own rules, as otherwise they will be ignored, too.
+The following Butane Config will overwrite the default ignore rule:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/audit/rules.d/99-default.rules
+ overwrite: true
+ contents:
+ inline: |
+ # custom rules may go here, can be empty to use only the standard rules
+```
+
+# Enabling auditd
+
+In addition to the above, it may make sense to enable `auditd.service`; here is a Butane Config snippet for that:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: auditd.service
+ enabled: true
+```
diff --git a/content/docs/latest/setup/security/customizing-sshd.md b/content/docs/latest/setup/security/customizing-sshd.md
new file mode 100644
index 00000000..87a14fdf
--- /dev/null
+++ b/content/docs/latest/setup/security/customizing-sshd.md
@@ -0,0 +1,195 @@
+---
+title: Customizing the SSH daemon
+description: How to change the way SSH runs.
+weight: 10
+aliases:
+ - ../../os/customizing-sshd
+ - ../../clusters/securing/customizing-sshd
+---
+
+Flatcar Container Linux defaults to running an OpenSSH daemon using `systemd` socket activation -- when a client connects to the port configured for SSH, `sshd` is started on the fly for that client using a `systemd` unit derived automatically from a template. In some cases you may want to customize this daemon's authentication methods or other configuration. This guide will show you how to do that at boot time using a [Butane Config][butane-configs], and after building by modifying the `systemd` unit file.
+
+As a practical example, when a client fails to connect by not completing the TCP connection (e.g. because the "client" is actually a TCP port scanner), the MOTD may report failures of `systemd` units (which will be named after the source IP that failed to connect) the next time you log in to the Flatcar Container Linux host. These failures are not themselves harmful, but it is good general practice to change how SSH listens, either by changing the IP address `sshd` listens on from the default setting (which listens on all configured interfaces), changing the default port, or both.
+
+## Customizing sshd with a Butane Config
+
+In this example we will disable logins for the `root` user, only allow login for the `core` user, and disable password-based authentication. For more details on the options that can be set in `/etc/ssh/sshd_config`, see the [OpenSSH manual][openssh-manual].
+If you're interested in additional security options, Mozilla provides a well-commented example of a [hardened configuration][mozilla-ssh-rec].
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/ssh/sshd_config
+ overwrite: true
+ mode: 0600
+ contents:
+ inline: |
+ # Use most defaults for sshd configuration.
+ UsePrivilegeSeparation sandbox
+ Subsystem sftp internal-sftp
+ UseDNS no
+
+ PermitRootLogin no
+ AllowUsers core
+ AuthenticationMethods publickey
+```
+
+### Changing the sshd port (Butane Config)
+
+Flatcar Container Linux ships with a socket-activated SSH daemon by default. The configuration for this can be found at `/usr/lib/systemd/system/sshd.socket`. We're going to override some of its default settings in the Butane Config provided at boot:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: sshd.socket
+ dropins:
+ - name: 10-sshd-port.conf
+ contents: |
+ [Socket]
+ ListenStream=
+ ListenStream=222
+```
+
+`sshd` will now listen only on port 222 on all interfaces once the system is provisioned.
+
+### Disabling socket activation for sshd
+
+It may be desirable to disable socket-activation for sshd to ensure it will reliably accept connections even when systemd or dbus aren't operating correctly.
+
+To configure sshd on Flatcar Container Linux without socket activation, a Butane Config file similar to the following may be used:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: sshd.service
+ enabled: true
+ - name: sshd.socket
+ mask: true
+```
+
+Note that in this configuration the port will be configured by updating the `/etc/ssh/sshd_config` file with the `Port` directive rather than via `sshd.socket`.
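+
+For example, the port could be provisioned as a drop-in file. This is a sketch that assumes your image's default `sshd_config` includes the `/etc/ssh/sshd_config.d` directory, as recent OpenSSH versions do; otherwise add the `Port` directive to `/etc/ssh/sshd_config` itself:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/ssh/sshd_config.d/10-port.conf
+      mode: 0600
+      contents:
+        inline: |
+          Port 222
+```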
+
+### Further reading
+
+Read the [full Butane Config][butane-configs] guide for more details on working with Butane Configs, including setting users' SSH keys.
+
+## Customizing sshd after first boot
+
+Since [Butane Configs][butane-configs] are only applied on first boot, existing machines will have to be configured in a different way.
+
+The following sections walk through applying the same changes documented above on a running machine.
+
+*Note*: To avoid accidentally locking yourself out of the machine, it's a good idea to double-check that you're able to log in directly on the machine's console, if applicable.
+
+### Customizing sshd\_config
+
+Since `/etc/ssh/sshd_config` is a symlink to a read-only file in `/usr`, it
+needs to be replaced with a regular file before it may be edited.
+
+This can be done, for example, by running `sudo sed -i '' /etc/ssh/sshd_config`: the in-place no-op edit replaces the symlink with a regular copy of the file.
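+
+The trick works because GNU `sed -i` writes a new file over the path instead of following the symlink. A scratch-directory illustration (all file names here are throwaway):
+
+```shell
+cd "$(mktemp -d)"
+echo "Port 22" > target.conf
+ln -s target.conf sshd_config        # stand-in for the /etc symlink
+sed -i '' sshd_config                # empty script: contents are unchanged,
+                                     # but the symlink becomes a regular file
+[ ! -L sshd_config ] && echo "sshd_config is now a regular file"
+cat sshd_config
+```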
+
+At this point, any configuration changes can easily be applied by editing the file `/etc/ssh/sshd_config`.
+
+### Changing the sshd port
+
+The sshd.socket unit may be configured via systemd [drop-ins][drop-ins].
+
+To change how sshd listens, update the list of `ListenStream`s in the `[Socket]` section of the dropin.
+
+*Note*: `ListenStream` is a list of values with each line adding to the list. An empty value clears the list, which is why `ListenStream=` is necessary to prevent it from *also* listening on the default port `22`.
+
+To change just the listened-to port (in this example, port 222), create a dropin at `/etc/systemd/system/sshd.socket.d/10-sshd-listen-ports.conf`:
+
+```ini
+# /etc/systemd/system/sshd.socket.d/10-sshd-listen-ports.conf
+[Socket]
+ListenStream=
+ListenStream=222
+```
+
+To change the listened-to IP address (in this example, 10.20.30.40):
+
+```ini
+# /etc/systemd/system/sshd.socket.d/10-sshd-listen-ports.conf
+[Socket]
+ListenStream=
+ListenStream=10.20.30.40:22
+FreeBind=true
+```
+
+You can specify both an IP and an alternate port in a single `ListenStream` line. IPv6 address bindings would be specified using the format `[2001:db8::7]:22`.
+
+*Note*: While specifying an IP address is optional, you must always specify the port, even if it is the default SSH port. The `FreeBind` option is used to allow the socket to be bound on addresses that are not yet configured on an interface, to avoid issues caused by delays in IP configuration at boot. (This option is required only if you are specifying an address.)
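+
+For example, a drop-in binding an IPv6 address together with a non-default port (the address and port here are illustrative):
+
+```ini
+# /etc/systemd/system/sshd.socket.d/10-sshd-listen-ports.conf
+[Socket]
+ListenStream=
+ListenStream=[2001:db8::7]:222
+FreeBind=true
+```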
+
+Multiple ListenStream lines can be specified, in which case `sshd` will listen on all the specified sockets:
+
+```ini
+# /etc/systemd/system/sshd.socket.d/10-sshd-listen-ports.conf
+[Socket]
+ListenStream=
+ListenStream=222
+ListenStream=10.20.30.40:223
+FreeBind=true
+```
+
+### Activating changes
+
+After creating the dropin file, the changes can be activated by doing a daemon-reload and restarting `sshd.socket`:
+
+```shell
+sudo systemctl daemon-reload
+sudo systemctl restart sshd.socket
+```
+
+We can now see that systemd is listening on the new sockets:
+
+```shell
+$ systemctl status sshd.socket
+● sshd.socket - OpenSSH Server Socket
+ Loaded: loaded (/etc/systemd/system/sshd.socket; disabled; vendor preset: disabled)
+ Active: active (listening) since Wed 2015-10-14 21:04:31 UTC; 2min 45s ago
+ Listen: [::]:222 (Stream)
+ 10.20.30.40:223 (Stream)
+ Accepted: 1; Connected: 0
+...
+```
+
+And if we attempt to connect to port 22 on our public IP, the connection is rejected, but port 222 works:
+
+```shell
+$ ssh core@[public IP]
+ssh: connect to host [public IP] port 22: Connection refused
+$ ssh -p 222 core@[public IP]
+Flatcar Container Linux by Kinvolk stable (1353.8.0)
+core@machine $
+```
+
+### Disabling socket-activation for sshd
+
+Simply mask the `sshd.socket` unit:
+
+```shell
+sudo systemctl mask --now sshd.socket
+```
+
+Finally, restart the sshd.service unit:
+
+```shell
+sudo systemctl restart sshd.service
+```
+
+### Further reading on systemd units
+
+For more information about configuring Flatcar Container Linux hosts with `systemd`, see [Getting Started with systemd][systemd-getting-started].
+
+[drop-ins]: ../systemd/drop-in-units
+[systemd-getting-started]: ../systemd/getting-started
+[openssh-manual]: http://www.openssh.com/cgi-bin/man.cgi?query=sshd_config
+[mozilla-ssh-rec]: https://wiki.mozilla.org/Security/Guidelines/OpenSSH#Modern_.28OpenSSH_6.7.2B.29
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/setup/security/disabling-smt.md b/content/docs/latest/setup/security/disabling-smt.md
new file mode 100644
index 00000000..3fc7448a
--- /dev/null
+++ b/content/docs/latest/setup/security/disabling-smt.md
@@ -0,0 +1,70 @@
+---
+title: Disabling SMT on Flatcar Container Linux
+linktitle: Disabling SMT
+description: How to disable Simultaneous Multi-Threading.
+weight: 60
+aliases:
+ - ../../os/disabling-smt
+ - ../../clusters/securing/disabling-smt
+---
+
+Recent Intel CPU vulnerabilities ([L1TF] and [MDS]) cannot be fully mitigated in software without disabling Simultaneous Multi-Threading. This can have a substantial performance impact and is only necessary for certain workloads, so for compatibility reasons, SMT is enabled by default.
+
+In addition, the Intel [TAA] vulnerability cannot be fully mitigated without disabling either SMT or Transactional Synchronization Extensions (TSX). Disabling TSX generally has less performance impact, so it is the preferred approach on systems that don't otherwise need to disable SMT. For compatibility reasons, TSX is enabled by default.
+
+SMT and TSX should be disabled on affected Intel processors under the following circumstances:
+
+1. A bare-metal host runs untrusted virtual machines, and [other arrangements][l1tf-mitigation] have not been made for mitigation.
+2. A bare-metal host runs untrusted code outside a virtual machine.
+
+SMT can be conditionally disabled by passing `mitigations=auto,nosmt` on the kernel command line. This will disable SMT only if required for mitigating a vulnerability. This approach has two caveats:
+
+1. It does not protect against unknown vulnerabilities in SMT.
+2. It allows future Flatcar Container Linux updates to disable SMT if needed to mitigate new vulnerabilities.
+
+Alternatively, SMT can be unconditionally disabled by passing `nosmt` on the kernel command line. This provides the most protection and avoids possible behavior changes on upgrades, at the cost of a potentially unnecessary reduction in performance.
+
+TSX can be conditionally disabled on vulnerable CPUs by passing `tsx=auto` on the kernel command line, or unconditionally disabled by passing `tsx=off`. However, neither setting takes effect on systems affected by MDS, since MDS mitigation automatically protects against TAA as well.
+
+For typical use cases, we recommend enabling the `mitigations=auto,nosmt` and `tsx=auto` command-line options.
+
+[L1TF]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html
+[l1tf-mitigation]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html#mitigation-selection-guide
+[MDS]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html
+[TAA]: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html
+
+## Configuring new machines
+
+The following Butane Config performs two tasks:
+
+1. Adds `mitigations=auto,nosmt tsx=auto` to the kernel command line. This affects the second and subsequent boots of the machine, but not the first boot.
+2. On the first boot, disables SMT at runtime if the system has an Intel processor. This is sufficient to protect against currently-known SMT vulnerabilities until the system is rebooted. After reboot, SMT will be re-enabled if the processor is not actually vulnerable.
+
+```yaml
+# Add kernel command-line arguments to automatically disable SMT or TSX
+# on CPUs where they are vulnerable.
+# Disable SMT on CPUs affected by MDS or similar vulnerabilities.
+# Disable TSX on CPUs affected by TAA but not by MDS.
+variant: flatcar
+version: 1.0.0
+kernel_arguments:
+ should_exist:
+ - mitigations=auto,nosmt
+ - tsx=auto
+```
+
+## Configuring existing machines
+
+To add `mitigations=auto,nosmt tsx=auto` to the kernel command line on an existing system, add the following line to `/usr/share/oem/grub.cfg`:
+
+```text
+set linux_append="$linux_append mitigations=auto,nosmt tsx=auto"
+```
+
+For example, using SSH:
+
+```shell
+ssh core@node01 'echo '\''set linux_append="$linux_append mitigations=auto,nosmt tsx=auto"'\'' | sudo tee -a /usr/share/oem/grub.cfg && sudo systemctl reboot'
+```
+
+If you use locksmith for reboot coordination, replace `systemctl reboot` with `locksmithctl send-need-reboot`.
diff --git a/content/docs/latest/setup/security/fips.md b/content/docs/latest/setup/security/fips.md
new file mode 100644
index 00000000..f1e13c5a
--- /dev/null
+++ b/content/docs/latest/setup/security/fips.md
@@ -0,0 +1,132 @@
+---
+title: Flatcar Container Linux FIPS guide
+linktitle: FIPS mode
+description: Enabling FIPS mode.
+weight: 20
+---
+
+FIPS stands for Federal Information Processing Standards, a set of standards issued by the National Institute of Standards and Technology (NIST). While Flatcar is not officially FIPS certified, it is possible to deploy it so that it is compliant with two of these standards:
+* [FIPS 200][fips-200]
+* [FIPS 140-2][fips-140-2]
+
+# Enabling FIPS
+
+Booting the instance with the kernel parameter `fips=1` makes it operate in FIPS 200 mode. This means the kernel will use FIPS-compliant algorithms and will enforce certain security practices, such as minimum RSA key [sizes][rsa-key-size]. It's also recommended to create the empty file `/etc/system-fips` for other software (like cryptsetup).
+
+To confirm that FIPS mode is enabled in the kernel, check the content of the file `/proc/sys/crypto/fips_enabled`:
+```bash
+$ cat /proc/sys/crypto/fips_enabled
+0 # disabled
+1 # enabled
+```
+
+or by inspecting boot logs:
+```bash
+$ journalctl --boot | grep -i "kernel: fips"
+Jun 27 18:07:22 localhost kernel: fips mode: enabled
+```
+
+# Enabling OpenSSL FIPS provider
+
+[OpenSSL][openssl] is an open-source library used for ciphering and hashing. As a library, it is widely used by programming languages and third-party programs to ensure security. The OpenSSL 3.0 FIPS provider has been FIPS [validated][certificate] since August 2022.
+
+The OpenSSL FIPS module is built by default on Flatcar. Overwriting `/etc/ssl/openssl.cnf` with the following section will enable the provider:
+```
+config_diagnostics = 1
+openssl_conf = openssl_init
+# it includes the fipsmodule configuration
+.include /etc/ssl/fipsmodule.cnf
+[openssl_init]
+providers = provider_sect
+[provider_sect]
+fips = fips_sect
+base = base_sect
+[base_sect]
+activate = 1
+```
+
+NOTE: For Flatcar LTS-2023 (with OpenSSL < 3.0.8), it's still required to generate the fipsmodule configuration, see upstream [documentation][openssl-fipsinstall] on how to do it.
+
+Once again, it's possible to check that FIPS is enabled:
+```bash
+$ openssl list -providers
+Providers:
+ base
+ name: OpenSSL Base Provider
+ version: 3.0.8
+ status: active
+ fips
+ name: OpenSSL FIPS Provider
+ version: 3.0.8
+ status: active
+$ echo "Flatcar + FIPS" | openssl sha1 -
+SHA1(stdin)= ee2219bd6a234fa0e4436b475fc3b351e2dc85a0
+$ echo "Flatcar + FIPS" | openssl md5 -
+Error setting digest C0422ACDB57F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:349:Global default library context, Algorithm (MD5 : 104), Properties ()C0422ACDB57F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:252:
+```
+
+The OpenSSL FIPS module is also used by `cryptsetup` when running in FIPS mode (detection is based on the `fips` kernel parameter and the `/etc/system-fips` file).
+
+To check that cryptsetup runs in FIPS mode, it's possible to add the `--verbose` flag:
+```bash
+$ cryptsetup --verbose luksFormat ./volume
+...
+Running in FIPS mode.
+Command successful.
+```
+
+_NOTE_: Formatting a LUKS device with `cryptsetup` on a non-FIPS instance will use `argon2id` as the key derivation function. This algorithm is not FIPS-compliant, so it will be impossible to open the LUKS device on a FIPS instance. A FIPS-compatible LUKS device can be created by formatting it with `cryptsetup luksFormat --pbkdf=pbkdf2 ./my-volume`, which is the default behavior on a Flatcar FIPS instance even if `--pbkdf=pbkdf2` is not specified.
+
+# Ignition provisioning
+
+The two sections above can be combined into one Ignition configuration, as follows.
+
+Starting from 3185.0.0 with Butane config:
+```yaml
+# To transpile it to Ignition config:
+# butane < config.yml > ignition.json
+---
+version: 1.0.0
+variant: flatcar
+kernel_arguments:
+ should_exist:
+ - fips=1
+storage:
+ files:
+ - path: /etc/system-fips
+ - path: /etc/ssl/openssl.cnf
+ overwrite: true
+ mode: 0644
+ contents:
+ inline: |
+ config_diagnostics = 1
+ openssl_conf = openssl_init
+ # it includes the fipsmodule configuration
+ .include /etc/ssl/fipsmodule.cnf
+ [openssl_init]
+ providers = provider_sect
+ [provider_sect]
+ fips = fips_sect
+ base = base_sect
+ [base_sect]
+ activate = 1
+```
+
+# Troubleshooting
+
+## SSH login does not work with OpenSSL FIPS provider
+
+It's possible for an SSH connection to be refused when the OpenSSL FIPS provider is enabled. Inspecting the sshd logs:
+```bash
+Jun 28 07:58:39 localhost sshd[1080]: ssh_dispatch_run_fatal: Connection from 10.0.2.2 port 40192: invalid argument [preauth]
+```
+
+In this case, it is likely that one of the `Ciphers` defined in `/etc/ssh/sshd_config` is not FIPS-compliant (like `chacha20-poly1305`).
+
+
+[fips-200]: https://csrc.nist.gov/publications/detail/fips/200/final
+[fips-140-2]: https://csrc.nist.gov/publications/detail/fips/140/2/final
+[rsa-key-size]: https://github.com/torvalds/linux/blob/941e3e7912696b9fbe3586083a7c2e102cee7a87/crypto/rsa_helper.c#L33-L37
+[openssl]: https://www.openssl.org/
+[openssl-fipsinstall]: https://www.openssl.org/docs/man3.0/man1/openssl-fipsinstall.html#EXAMPLES
+[certificate]: https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4282
diff --git a/content/docs/latest/setup/security/generate-self-signed-certificates.md b/content/docs/latest/setup/security/generate-self-signed-certificates.md
new file mode 100644
index 00000000..e94c20d2
--- /dev/null
+++ b/content/docs/latest/setup/security/generate-self-signed-certificates.md
@@ -0,0 +1,302 @@
+---
+title: Generate self-signed certificates
+description: How to create a certificate authority and generate certificates for servers, peers, and clients.
+weight: 30
+aliases:
+ - ../../os/generate-self-signed-certificates
+ - ../../container-runtimes/generate-self-signed-certificates
+---
+
+If you build a Flatcar Container Linux cluster on top of public networks, it is recommended to enable encryption for Flatcar Container Linux services to prevent traffic interception and man-in-the-middle attacks. For this you need a Certificate Authority (CA), private keys, and certificates signed by the CA. Let's use [cfssl][cfssl] and walk through the whole process of creating all these components.
+
+**NOTE:** We will use a basic procedure here. If your configuration requires advanced security options, please refer to the official [cfssl][cfssl] documentation.
+
+## Download cfssl
+
+CloudFlare distributes the [cfssl][cfssl] source code on its GitHub page and binaries on the [cfssl website][cfssl-bin].
+
+Our documentation assumes that you will run [cfssl][cfssl] on your local x86_64 Linux host.
+
+```shell
+mkdir ~/bin
+curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
+curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
+chmod +x ~/bin/{cfssl,cfssljson}
+export PATH=$PATH:~/bin
+```
+
+## Initialize a certificate authority
+
+First of all, save the default `cfssl` options for later substitution:
+
+```shell
+mkdir ~/cfssl
+cd ~/cfssl
+cfssl print-defaults config > ca-config.json
+cfssl print-defaults csr > ca-csr.json
+```
+
+### Certificate types which are used inside Flatcar Container Linux
+
+* A **client certificate** is used by the server to authenticate the client, for example `etcdctl`, `etcd proxy`, or `docker` clients.
+* A **server certificate** is used by the server and verified by the client to establish the server's identity, for example a `docker` server or `kube-apiserver`.
+* A **peer certificate** is used by etcd cluster members as they communicate with each other in both directions.
+
+### Configure CA options
+
+Now we can configure the signing options in the `ca-config.json` config file. The default options contain the following preconfigured fields:
+
+* profiles: **www** with the `server auth` (TLS Web Server Authentication) X509 V3 extension and **client** with the `client auth` (TLS Web Client Authentication) X509 V3 extension.
+* expiry: set to `8760h` (365 days) by default
+
+Let's rename the **www** profile to **server**, create an additional **peer** profile with both the `server auth` and `client auth` extensions, and set the expiry to 43800h (5 years):
+
+```json
+{
+ "signing": {
+ "default": {
+ "expiry": "43800h"
+ },
+ "profiles": {
+ "server": {
+ "expiry": "43800h",
+ "usages": [
+ "signing",
+ "key encipherment",
+ "server auth"
+ ]
+ },
+ "client": {
+ "expiry": "43800h",
+ "usages": [
+ "signing",
+ "key encipherment",
+ "client auth"
+ ]
+ },
+ "peer": {
+ "expiry": "43800h",
+ "usages": [
+ "signing",
+ "key encipherment",
+ "server auth",
+ "client auth"
+ ]
+ }
+ }
+ }
+}
+```
+
+You can also modify `ca-csr.json` Certificate Signing Request (CSR):
+
+```json
+{
+ "CN": "My own CA",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "US",
+ "L": "CA",
+ "O": "My Company Name",
+ "ST": "San Francisco",
+ "OU": "Org Unit 1"
+ }
+ ]
+}
+```
+
+And generate CA with defined options:
+
+```shell
+cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
+```
+
+You'll get the following files:
+
+```text
+ca-key.pem
+ca.csr
+ca.pem
+```
+
+* Please keep the `ca-key.pem` file in a safe place. This key allows the creation of any kind of certificate within your CA.
+* **\*.csr** files are not used in our example.
+
+### Generate server certificate
+
+```shell
+cfssl print-defaults csr > server.json
+```
+
+The most important values for a server certificate are the **Common Name (CN)** and **hosts**. We have to substitute them, for example:
+
+```json
+...
+ "CN": "coreos1",
+ "hosts": [
+ "192.168.122.68",
+ "ext.example.com",
+ "coreos1.local",
+ "coreos1"
+ ],
+...
+```
+
+Now we are ready to generate the server certificate and private key:
+
+```shell
+cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server.json | cfssljson -bare server
+```
+
+Or, without a CSR JSON file:
+
+```shell
+echo '{"CN":"coreos1","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server -hostname="192.168.122.68,ext.example.com,coreos1.local,coreos1" - | cfssljson -bare server
+```
+
+You'll get the following files:
+
+```text
+server-key.pem
+server.csr
+server.pem
+```
+
+### Generate peer certificate
+
+```shell
+cfssl print-defaults csr > member1.json
+```
+
+Substitute the CN and hosts values, for example:
+
+```json
+...
+ "CN": "member1",
+ "hosts": [
+ "192.168.122.101",
+ "ext.example.com",
+ "member1.local",
+ "member1"
+ ],
+...
+```
+
+Now we are ready to generate the member1 certificate and private key:
+
+```shell
+cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1
+```
+
+Or, without a CSR JSON file:
+
+```shell
+echo '{"CN":"member1","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer -hostname="192.168.122.101,ext.example.com,member1.local,member1" - | cfssljson -bare member1
+```
+
+You'll get the following files:
+
+```text
+member1-key.pem
+member1.csr
+member1.pem
+```
+
+Repeat these steps for each `etcd` member hostname.
+
+### Generate client certificate
+
+```shell
+cfssl print-defaults csr > client.json
+```
+
+For a client certificate we can ignore the **hosts** value and set only the **Common Name (CN)** to **client**:
+
+```json
+...
+ "CN": "client",
+ "hosts": [""],
+...
+```
+
+Generate the client certificate:
+
+```shell
+cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
+```
+
+Or, without a CSR JSON file:
+
+```shell
+echo '{"CN":"client","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client - | cfssljson -bare client
+```
+
+You'll get the following files:
+
+```text
+client-key.pem
+client.csr
+client.pem
+```
+
+## TLDR
+
+### Download binaries
+
+```shell
+mkdir ~/bin
+curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
+curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
+chmod +x ~/bin/{cfssl,cfssljson}
+export PATH=$PATH:~/bin
+```
+
+### Create directory to store certificates
+
+```shell
+mkdir ~/cfssl
+cd ~/cfssl
+```
+
+### Generate CA and certificates
+
+```shell
+echo '{"CN":"CA","key":{"algo":"rsa","size":2048}}' | cfssl gencert -initca - | cfssljson -bare ca -
+echo '{"signing":{"default":{"expiry":"43800h","usages":["signing","key encipherment","server auth","client auth"]}}}' > ca-config.json
+export ADDRESS=192.168.122.68,ext1.example.com,coreos1.local,coreos1
+export NAME=server
+echo '{"CN":"'$NAME'","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -config=ca-config.json -ca=ca.pem -ca-key=ca-key.pem -hostname="$ADDRESS" - | cfssljson -bare $NAME
+export ADDRESS=
+export NAME=client
+echo '{"CN":"'$NAME'","hosts":[""],"key":{"algo":"rsa","size":2048}}' | cfssl gencert -config=ca-config.json -ca=ca.pem -ca-key=ca-key.pem -hostname="$ADDRESS" - | cfssljson -bare $NAME
+```
+
+### Verify data
+
+```shell
+openssl x509 -in ca.pem -text -noout
+openssl x509 -in server.pem -text -noout
+openssl x509 -in client.pem -text -noout
+```
+
+### Things to know
+
+* Don't put your `ca-key.pem` into a Butane Config when untrusted workloads running on the machine could access the instance metadata; it is recommended to store the key in a safe place instead. Whoever holds this key can generate an unlimited number of certificates.
+* Keep **key** files safe. Don't forget to set proper file permissions, e.g. `chmod 0600 server-key.pem`.
+* Certificates in this **TLDR** example have both the `server auth` and `client auth` X509 v3 extensions, so you can use them for both server and client authentication.
+* You are free to generate keys and certificates for the wildcard `*` address as well. They will work on any machine. This simplifies certificate handling but increases the security risk.
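+
+As one illustrative option (the paths and placeholder PEM contents below are hypothetical, not part of the guide), the generated certificates and keys can be written with restrictive permissions at provisioning time through a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    # World-readable certificate
+    - path: /etc/ssl/etcd/server.pem
+      mode: 0644
+      contents:
+        inline: |
+          -----BEGIN CERTIFICATE-----
+          <contents of server.pem>
+          -----END CERTIFICATE-----
+    # Private key, readable by root only
+    - path: /etc/ssl/etcd/server-key.pem
+      mode: 0600
+      contents:
+        inline: |
+          -----BEGIN RSA PRIVATE KEY-----
+          <contents of server-key.pem>
+          -----END RSA PRIVATE KEY-----
+```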
+
+## More information
+
+For more examples, check out these documents:
+
+ * [Custom Certificate Authorities](adding-certificate-authorities)
+ * [etcd Security Model](https://etcd.io/docs/v3.4.0/op-guide/security/)
+
+[cfssl]: https://github.com/cloudflare/cfssl
+[cfssl-bin]: https://pkg.cfssl.org
diff --git a/content/docs/latest/setup/security/hardening-guide.md b/content/docs/latest/setup/security/hardening-guide.md
new file mode 100644
index 00000000..461340cd
--- /dev/null
+++ b/content/docs/latest/setup/security/hardening-guide.md
@@ -0,0 +1,116 @@
+---
+title: Flatcar Container Linux hardening guide
+linktitle: Hardening options
+description: Disabling unnecessary services and other hardening options.
+weight: 20
+aliases:
+ - ../../os/hardening-guide
+ - ../../clusters/securing/hardening-guide
+---
+
+This guide covers the basics of securing a Flatcar Container Linux instance. Flatcar Container Linux has a very slim network profile and the only service that listens by default on Flatcar Container Linux is sshd on port 22 on all interfaces. There are also some defaults for local users and services that should be considered.
+
+## Remote listening services
+
+### Disabling sshd
+
+To disable sshd from listening you can stop the socket:
+
+```shell
+systemctl mask sshd.socket --now
+```
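+
+If sshd should never listen on a machine, the socket can also be masked at provisioning time. A minimal Butane sketch (the `mask` option is part of the systemd unit schema):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+  units:
+    # Mask the socket so sshd is never activated
+    - name: sshd.socket
+      mask: true
+```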
+
+If you wish to make further customizations see our [customize sshd guide][sshd-guide].
+
+## Remote non-listening services
+
+### etcd and Locksmith
+
+etcd and Locksmith should be secured and authenticated using TLS if you are using these services. Please see the relevant guides for details.
+
+* [etcd security guide][etcd-sec-guide]
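+
+As a sketch of what the end state can look like (the file paths here are hypothetical), etcd can be pointed at the generated certificates through its environment variables, for example in a systemd drop-in:
+
+```ini
+[Service]
+# Server certificate and key for client connections
+Environment=ETCD_CERT_FILE=/etc/ssl/etcd/server.pem
+Environment=ETCD_KEY_FILE=/etc/ssl/etcd/server-key.pem
+# CA used to verify client certificates
+Environment=ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem
+Environment=ETCD_CLIENT_CERT_AUTH=true
+```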
+
+## Local services
+
+### Local users
+
+Flatcar Container Linux has a single default user account called "core". Generally this is the user that administrators add SSH keys to via a Butane Config in order to log in. The core user is, by default, a member of the wheel group, which grants sudo access. Since the group membership can't easily be changed, access can be restricted either by requiring a password for sudo while not setting one, or by disabling login for the `core` user.
+
+A sudo drop-in can be created under `/etc/sudoers.d/core-passwd` with the contents `core ALL=(ALL) ALL`; as long as the core user has no password set, it can't use `sudo`. Here is a Butane snippet:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/sudoers.d/core-passwd
+ mode: 0644
+ contents:
+ inline: |
+ core ALL=(ALL) ALL
+```
+
+You can disable the `core` user by setting the login shell to `/sbin/nologin`; here is a Butane snippet:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+passwd:
+ users:
+ - name: core
+ shell: /sbin/nologin
+```
+
+### Docker daemon
+
+The docker daemon is accessible via a unix domain socket at `/run/docker.sock`. Users in the "docker" group have access to this service, and access to the docker socket grants capabilities similar to sudo. The core user, by default, is a member of the docker group. Since the group membership can't easily be changed, access can be restricted either by disabling login for the `core` user or by restricting the Docker socket permissions.
+
+You can restrict the Docker socket to root by creating a unit drop-in for `docker.socket` in `/etc/systemd/system/docker.socket.d/10-restrict.conf`; here is a Butane snippet:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: docker.socket
+ dropins:
+ - name: 10-restrict.conf
+ contents: |
+ [Socket]
+ SocketGroup=root
+```
+
+## Additional hardening
+
+### Disabling Simultaneous Multi-Threading
+
+Recent Intel CPU vulnerabilities cannot be fully mitigated in software without disabling Simultaneous Multi-Threading. This can have a substantial performance impact and is only necessary for certain workloads, so for compatibility reasons, SMT is enabled by default.
+
+The [SMT on Container Linux guide][smt-guide] provides guidance and instructions for disabling SMT.
+
+### Disable USB
+
+If you don't expect to ever use USB, you can blacklist the kernel module; here is a Butane snippet:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/modprobe.d/blacklist.conf
+ mode: 0644
+ contents:
+ inline: |
+ blacklist usb-storage
+```
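+
+Note that `blacklist` only prevents automatic loading; a module can still be loaded explicitly with `modprobe`. A common additional pattern (a sketch, extending the same file as above) is an `install` override that replaces the module load with a no-op:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/modprobe.d/blacklist.conf
+      mode: 0644
+      contents:
+        inline: |
+          blacklist usb-storage
+          # Run /bin/true instead of loading the module
+          install usb-storage /bin/true
+```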
+
+### SELinux
+
+SELinux is a fine-grained access control mechanism integrated into Flatcar Container Linux. Each container runs in its own independent SELinux context, increasing isolation between containers and providing another layer of protection should a container be compromised.
+
+Flatcar Container Linux implements SELinux, but currently does not enforce SELinux protections by default. The [SELinux on Flatcar Container Linux guide][selinux-guide] covers the process of checking containers for SELinux policy compatibility and switching SELinux into enforcing mode.
+
+[smt-guide]: disabling-smt
+[sshd-guide]: customizing-sshd
+[etcd-sec-guide]: https://etcd.io/docs/v3.4.0/op-guide/security/
+[selinux-guide]: selinux
diff --git a/content/docs/latest/setup/security/selinux.md b/content/docs/latest/setup/security/selinux.md
new file mode 100644
index 00000000..4b8a1817
--- /dev/null
+++ b/content/docs/latest/setup/security/selinux.md
@@ -0,0 +1,55 @@
+---
+title: SELinux on Flatcar Container Linux
+linktitle: SELinux
+description: How to configure, enable or disable SELinux.
+weight: 10
+aliases:
+ - ../../os/selinux
+ - ../../clusters/securing/selinux
+---
+
+SELinux is a fine-grained access control mechanism integrated into Flatcar Container Linux and rkt. Each container runs in its own independent SELinux context, increasing isolation between containers and providing another layer of protection should a container be compromised.
+
+Flatcar Container Linux implements SELinux, but currently does not enforce SELinux protections by default. This allows deployers to verify container operation before enabling SELinux enforcement. This document covers the process of checking containers for SELinux policy compatibility, and switching SELinux into `enforcing` mode.
+
+## Check a container's compatibility with SELinux policy
+
+To verify whether the current SELinux policy would inhibit your containers, enable SELinux logging. In the following set of commands, we delete the rules that suppress this logging by default, and copy the policy store from Flatcar Container Linux's read-only `/usr` to a writable file system location.
+
+```shell
+rm /etc/audit/rules.d/80-selinux.rules
+rm /etc/audit/rules.d/99-default.rules
+rm /etc/selinux/mcs
+cp -a /usr/lib/selinux/mcs /etc/selinux
+rm /var/lib/selinux
+cp -a /usr/lib/selinux/policy /var/lib/selinux
+semodule -DB
+systemctl restart audit-rules
+```
+
+Now run your container. Check the system logs for any messages containing `avc: denied`. Such messages indicate that SELinux in `enforcing` mode would prevent the container from performing the logged operation. Please open an issue on [GitHub][gh-flatcar], including the full avc log message.
+
+## Enable SELinux enforcement
+
+Once satisfied that your container workload is compatible with the SELinux policy, you can temporarily enable enforcement by running the following command as root:
+
+`$ setenforce 1`
+
+A reboot will reset SELinux to `permissive` mode.
+
+### Make SELinux enforcement permanent
+
+To enable SELinux enforcement across reboots, replace the symbolic link `/etc/selinux/config` with the file it targets, so that the file can be written. You can use the `readlink` command to dereference the link, as shown in the following one-liner:
+
+`$ cp --remove-destination $(readlink -f /etc/selinux/config) /etc/selinux/config`
+
+Now, edit `/etc/selinux/config` to replace `SELINUX=permissive` with `SELINUX=enforcing`.
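+
+If machines are provisioned rather than edited by hand, the same end state can be sketched with a Butane Config that overwrites the symlink with a regular file (the `SELINUXTYPE=mcs` value below is an assumption based on the policy store paths shown earlier):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  files:
+    - path: /etc/selinux/config
+      # Replace the symlink with a writable regular file
+      overwrite: true
+      mode: 0644
+      contents:
+        inline: |
+          SELINUX=enforcing
+          SELINUXTYPE=mcs
+```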
+
+## Limitations
+
+* SELinux enforcement is currently incompatible with Btrfs volumes and volumes that are shared between multiple containers.
+* Starting from Flannel-0.15 installed via `kube-flannel.yml`, SELinux enforcement will prevent the CNI installation on the host. (See: [flatcar-linux/Flatcar#635][flannel-issue])
+
+
+[gh-flatcar]: https://github.com/flatcar/Flatcar/issues
+[flannel-issue]: https://github.com/flatcar/Flatcar/issues/635
diff --git a/content/docs/latest/setup/security/sssd.md b/content/docs/latest/setup/security/sssd.md
new file mode 100644
index 00000000..2eb88960
--- /dev/null
+++ b/content/docs/latest/setup/security/sssd.md
@@ -0,0 +1,41 @@
+---
+title: Configuring SSSD on Flatcar Container Linux
+linktitle: Configuring SSSD
+description: Using the System Security Service Daemon to integrate with enterprise authentication services.
+weight: 10
+aliases:
+ - ../../os/sssd
+ - ../../clusters/securing/sssd
+---
+
+Flatcar Container Linux ships with the System Security Services Daemon, allowing integration between Flatcar Container Linux and enterprise authentication services.
+
+## Configuring SSSD
+
+Edit `/etc/sssd/sssd.conf`. This configuration file is fully documented in the [sssd.conf man page](https://jhrozek.fedorapeople.org/sssd/1.13.1/man/sssd.conf.5.html). For example, to configure SSSD to use an IPA server called `ipa.example.com`, `sssd.conf` should read:
+
+```ini
+[sssd]
+config_file_version = 2
+services = nss, pam
+domains = LDAP
+[nss]
+[pam]
+[domain/LDAP]
+id_provider = ldap
+auth_provider = ldap
+ldap_schema = ipa
+ldap_uri = ldap://ipa.example.com
+```
+
+## Start SSSD
+
+```shell
+sudo systemctl start sssd
+```
+
+## Make SSSD available on future reboots
+
+```shell
+sudo systemctl enable sssd
+```
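+
+## Provisioning SSSD with a Butane Config
+
+The same configuration can also be put in place at first boot. A hedged sketch, reusing the hypothetical `ipa.example.com` values from above (note that SSSD expects `sssd.conf` to be readable only by root):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+  units:
+    # Start SSSD now and on future reboots
+    - name: sssd.service
+      enabled: true
+storage:
+  files:
+    - path: /etc/sssd/sssd.conf
+      mode: 0600
+      contents:
+        inline: |
+          [sssd]
+          config_file_version = 2
+          services = nss, pam
+          domains = LDAP
+          [nss]
+          [pam]
+          [domain/LDAP]
+          id_provider = ldap
+          auth_provider = ldap
+          ldap_schema = ipa
+          ldap_uri = ldap://ipa.example.com
+```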
diff --git a/content/docs/latest/setup/security/trusted-computing-hardware-requirements.md b/content/docs/latest/setup/security/trusted-computing-hardware-requirements.md
new file mode 100644
index 00000000..3e7213f0
--- /dev/null
+++ b/content/docs/latest/setup/security/trusted-computing-hardware-requirements.md
@@ -0,0 +1,70 @@
+---
+title: Trusted Computing requirements on Flatcar Container Linux
+linktitle: Trusted Computing
+description: How to check for hardware and firmware support for using Trusted Computing.
+weight: 20
+aliases:
+ - ../../os/trusted-computing-hardware-requirements
+ - ../../clusters/securing/trusted-computing-hardware-requirements
+---
+
+Trusted Computing requires support in both system hardware and firmware. This document specifies the required support and explains how to determine if a physical machine has the features needed to enable Trusted Computing in Flatcar Container Linux.
+
+## 1. Check for Trusted Platform Module
+
+Trusted Computing depends on the presence of a Trusted Platform Module (TPM). The TPM is a motherboard component responsible for storing the state of the system boot process, and providing a secure communication channel over which this state can be verified. To check for the presence of a TPM, install the latest Alpha version of Flatcar Container Linux and try to list the TPM device file in the `/sys` system control filesystem:
+
+```shell
+ls /sys/class/tpm/tpm0
+```
+
+If this returns an error, the system either does not have a TPM, or it is not enabled in the system firmware. Firmware configuration varies by system. Consult vendor documentation for details.
+
+## 2. Check TPM version
+
+Version 1.2 TPMs are currently supported. Read the TPM device ID file to discover the TPM version:
+
+```shell
+cat /sys/class/tpm/tpm0/device/id
+```
+
+The contents of the `id` file vary for supported version 1.2 TPMs. It is simplest to check that the file does *not* contain the known string for unsupported version 2.0 TPMs, `MSFT0101`. Almost any other non-zero, non-error output from reading the `id` file indicates a supported version 1.2 TPM.
+
+Support for version 2.0 TPMs identified with the `MSFT0101` string will be added in a future Flatcar Container Linux release.
+
+## 3. Check TPM is enabled and active
+
+The TPM device provides control files in the `/sys` filesystem, as seen above. Read the `enabled` and `active` files to check TPM status:
+
+```shell
+cat /sys/class/tpm/tpm0/device/enabled
+cat /sys/class/tpm/tpm0/device/active
+```
+
+If either of these commands prints "0", reconfigure the TPM by writing a code for TPM activation at the next system boot to the PPI `request` file:
+
+```shell
+echo 6 > /sys/class/tpm/tpm0/device/ppi/request
+```
+
+Reboot the system and repeat the checks above to confirm that the TPM is now enabled and active.
+
+## 4. Check boot measurement
+
+The Flatcar Container Linux bootloader will record the state of boot components during the boot process — *measuring* each part, in TPM parlance, and storing the result in its Platform Configuration Registers (PCR). Verify that this measurement has been successful by reading the TPM device's `pcrs` file, a textual representation of the contents of all PCRs:
+
+```shell
+cat /sys/class/tpm/tpm0/device/pcrs
+```
+
+Boot component measurements are recorded in PCRs 9 through 13. These positions in `pcrs` should all contain meaningful values; that is, values that are neither all zeros:
+
+`00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00`
+
+nor *max*:
+
+`FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF FF`
+
+## Trusted
+
+A system that passes each of the above tests supports Flatcar Container Linux Trusted Computing and is actively measuring the boot process over the secure TPM channel.
diff --git a/content/docs/latest/setup/storage/_index.md b/content/docs/latest/setup/storage/_index.md
new file mode 100644
index 00000000..49c0b0c6
--- /dev/null
+++ b/content/docs/latest/setup/storage/_index.md
@@ -0,0 +1,8 @@
+---
+title: Managing Storage on Flatcar Container Linux
+linktitle: Storage Setup
+description: Adding, Extending and Configuring Storage on Flatcar.
+weight: 25
+aliases:
+ - ../clusters/scaling/
+---
diff --git a/content/docs/latest/setup/storage/adding-disk-space.md b/content/docs/latest/setup/storage/adding-disk-space.md
new file mode 100644
index 00000000..bd56e8a6
--- /dev/null
+++ b/content/docs/latest/setup/storage/adding-disk-space.md
@@ -0,0 +1,48 @@
+---
+title: Adding disk space to your Flatcar Container Linux machine
+linktitle: Additional disk space
+description: How to increase available disk space, depending on the platform.
+weight: 10
+aliases:
+ - ../../os/adding-disk-space
+ - ../../clusters/scaling/adding-disk-space
+---
+
+On a Flatcar Container Linux machine, the operating system itself is mounted as a read-only partition at `/usr`. The root partition provides read-write storage by default and on a fresh install is mostly blank. The default size of this partition depends on the platform but it is usually between 3GB and 16GB. If more space is required simply extend the virtual machine's disk image and Flatcar Container Linux will fix the partition table and resize the root partition to fill the disk on the next boot.
+
+## Amazon EC2
+
+Amazon doesn't support directly resizing volumes of live machines through the web console; you must either take a snapshot and create a new volume based on that snapshot, or use the AWS CLI or another API interface (such as Terraform). Refer to the AWS EC2 documentation on [expanding EBS volumes][ebs-expand-volume] for detailed instructions.
+
+[ebs-expand-volume]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/requesting-ebs-volume-modifications.html
+
+## QEMU (qemu-img)
+
+Even if you are not using QEMU itself, the qemu-img tool is the easiest to use. It works on raw, qcow2, vmdk, and most other formats. The command accepts either an absolute size or a relative size by adding a `+` prefix. Unit suffixes such as `G` or `M` are also supported.
+
+```shell
+# Increase the disk size by 5GB
+qemu-img resize flatcar_production_qemu_image.img +5G
+```
+
+## VMware
+
+The interface available for resizing disks in VMware varies depending on the product. See this [Knowledge Base article][vmkb1004047] for details. Most products include a tool called `vmware-vdiskmanager`. The size must be the absolute disk size; relative sizes are not supported, so be careful to only increase the size, never shrink it. The unit suffixes `Gb` and `Mb` are supported.
+
+```shell
+# Set the disk size to 20GB
+vmware-vdiskmanager -x 20Gb flatcar_developer_vmware_insecure.vmx
+```
+
+[vmkb1004047]: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004047
+
+## VirtualBox
+
+Use qemu-img or vmware-vdiskmanager as described above. VirtualBox does not support resizing VMDK disk images, only VDI and VHD disks. Meanwhile VirtualBox only supports using VMDK disk images with the OVF config file format used for importing/exporting virtual machines.
+
+If you have no other options you can try converting the VMDK disk image to a VDI image and configuring a new virtual machine with it:
+
+```shell
+VBoxManage clonehd old.vmdk new.vdi --format VDI
+VBoxManage modifyhd new.vdi --resize 20480
+```
diff --git a/content/docs/latest/setup/storage/adding-swap.md b/content/docs/latest/setup/storage/adding-swap.md
new file mode 100644
index 00000000..cadec354
--- /dev/null
+++ b/content/docs/latest/setup/storage/adding-swap.md
@@ -0,0 +1,178 @@
+---
+title: Managing swap space on Flatcar Container Linux
+linktitle: Managing swap space
+description: How to create swapfiles, turn swap on/off, tune swap parameters and debug swap issues.
+weight: 40
+aliases:
+ - ../../os/adding-swap
+ - ../../clusters/management/adding-swap
+---
+
+Swap is the process of moving pages of memory to a designated part of the hard disk, freeing up space when needed. Swap can be used to alleviate problems with low-memory environments.
+An alternative is to use RAM compression with zram.
+
+By default Flatcar Container Linux does not include a partition for swap, but you can configure your system to have swap, either by including a dedicated partition for it or by creating a swapfile.
+
+## Managing swap with systemd
+
+systemd provides a specialized `.swap` unit file type which may be used to activate swap. The below example shows how to add a swapfile and activate it using systemd.
+
+### Creating a swapfile
+
+The following commands, run as root, will make a 1GiB file suitable for use as swap.
+
+```shell
+mkdir -p /var/vm
+fallocate -l 1024m /var/vm/swapfile1
+chmod 600 /var/vm/swapfile1
+mkswap /var/vm/swapfile1
+```
+
+### Creating the systemd unit file
+
+The following systemd unit activates the swapfile we created. It should be written to `/etc/systemd/system/var-vm-swapfile1.swap`.
+
+```ini
+[Unit]
+Description=Turn on swap
+
+[Swap]
+What=/var/vm/swapfile1
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### Enable the unit and start using swap
+
+Use `systemctl` to enable the unit once created. The `swappiness` value may be modified if desired.
+
+```shell
+$ systemctl enable --now var-vm-swapfile1.swap
+# Optionally
+$ echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/80-swappiness.conf
+$ systemctl restart systemd-sysctl
+```
+
+Swap has been enabled and will be started automatically on subsequent reboots. We can verify that the swap is activated by running `swapon`:
+
+```shell
+$ swapon
+NAME TYPE SIZE USED PRIO
+/var/vm/swapfile1 file 1024M 0B -1
+```
+
+## Problems and Considerations
+
+### Btrfs and xfs
+
+Please check the [btrfs instructions](https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#swapfile-support) on how to create swapfiles on btrfs.
+In summary, you must use a single-device filesystem, create the file on a non-snapshotted subvolume
+(e.g., you can create a new subvolume dedicated to the file), create the file with `truncate -s 0 ./swapfile1`,
+and then disable CoW and compression (`chattr +C ./swapfile1`, `btrfs property set ./swapfile1 compression none`).
+
+Swapfiles should not be created on xfs volumes. For systems using xfs, it is recommended to create a dedicated swap partition.
+
+### Partition size
+
+The swapfile cannot be larger than the partition on which it is stored.
+
+### Checking if a system can use a swapfile
+
+Use the `df(1)` command to verify that a partition has the right format and enough available space:
+
+```shell
+$ df -Th
+Filesystem Type Size Used Avail Use% Mounted on
+[...]
+/dev/sdXN ext4 2.0G 3.0M 1.8G 1% /var
+```
+
+The block device mounted at `/var/`, `/dev/sdXN`, is the correct filesystem type and has enough space for a 1GiB swapfile.
+
+## Adding swap with a Butane Config
+
+The following config sets up a 1GiB swapfile located at `/var/vm/swapfile1`.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/sysctl.d/80-swappiness.conf
+ contents:
+ inline: "vm.swappiness=10"
+
+systemd:
+ units:
+ - name: var-vm-swapfile1.swap
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Turn on swap
+ Requires=create-swapfile.service
+ After=create-swapfile.service
+
+ [Swap]
+ What=/var/vm/swapfile1
+
+ [Install]
+ WantedBy=multi-user.target
+ - name: create-swapfile.service
+ contents: |
+ [Unit]
+ Description=Create a swapfile
+ RequiresMountsFor=/var
+ DefaultDependencies=no
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/mkdir -p /var/vm
+ ExecStart=/usr/bin/fallocate -l 1024m /var/vm/swapfile1
+ ExecStart=/usr/bin/chmod 600 /var/vm/swapfile1
+ ExecStart=/usr/sbin/mkswap /var/vm/swapfile1
+ RemainAfterExit=true
+```
+
+## Using a dedicated swap disk
+
+The following Butane config sets up `/dev/sdb` to be used as swap:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ disks:
+ - device: /dev/sdb
+ wipe_table: true
+ partitions:
+ - label: swap
+ type_guid: 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
+ filesystems:
+ - device: /dev/disk/by-partlabel/swap
+ format: swap
+ wipe_filesystem: true
+ label: swap
+ with_mount_unit: true
+```
+
+NB: the systemd unit name is derived with
+`systemd-escape -p /dev/disk/by-partlabel/swap`, as systemd uses `-` as the
+path separator, meaning that paths containing `-` have to be escaped. This
+leads to a file `'dev-disk-by\x2dpartlabel-swap.swap'` being created in
+`/etc/systemd/system`.
+
+## Using zram
+
+With zram a virtual `/dev/zram0` device acts as swap space which lives compressed in memory.
+At the moment there is no zram generator, so a manual setup is needed, similar to the creation of a swapfile.
+
+```shell
+$ sudo modprobe zram
+$ sudo zramctl -f -s 1G
+$ sudo mkswap /dev/zram0
+$ sudo swapon /dev/zram0
+$ zramctl
+NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
+/dev/zram0 lzo-rle 1G 4K 74B 12K 8 [SWAP]
+```
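+
+To make the zram device persist across reboots, the same steps can be wrapped in a oneshot unit. A sketch via Butane (the binary paths under `/usr/sbin` are assumptions about the image layout):
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+  units:
+    - name: zram-swap.service
+      enabled: true
+      contents: |
+        [Unit]
+        Description=Set up zram-backed swap
+        [Service]
+        Type=oneshot
+        RemainAfterExit=true
+        # Same steps as the manual setup above
+        ExecStart=/usr/sbin/modprobe zram
+        ExecStart=/usr/sbin/zramctl --size 1G /dev/zram0
+        ExecStart=/usr/sbin/mkswap /dev/zram0
+        ExecStart=/usr/sbin/swapon /dev/zram0
+        [Install]
+        WantedBy=multi-user.target
+```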
diff --git a/content/docs/latest/setup/storage/iscsi.md b/content/docs/latest/setup/storage/iscsi.md
new file mode 100644
index 00000000..3043ea2e
--- /dev/null
+++ b/content/docs/latest/setup/storage/iscsi.md
@@ -0,0 +1,133 @@
+---
+title: Configuring iSCSI on Flatcar Container Linux
+linktitle: Configuring iSCSI
+description: How to configure the iSCSI daemon, either manually or automatically.
+weight: 30
+aliases:
+ - ../../os/iscsi
+ - ../../clusters/management/iscsi
+---
+
+[iSCSI][iscsi-wiki] is a protocol which provides block-level access to storage devices over IP.
+This allows applications to treat remote storage devices as if they were local disks.
+iSCSI handles taking requests from clients and carrying them out on the remote SCSI devices.
+
+Flatcar Container Linux has integrated support for mounting iSCSI devices.
+This guide covers iSCSI configuration manually or automatically with [Butane Configs][butane-configs].
+
+## Manual iSCSI configuration
+
+### Set the Flatcar Container Linux iSCSI initiator name
+
+iSCSI clients each have a unique initiator name.
+Flatcar Container Linux generates a unique initiator name on each install and stores it in `/etc/iscsi/initiatorname.iscsi`.
+This may be replaced if necessary.
+
+### Configure the global iSCSI credentials
+
+If all iSCSI mounts on a Flatcar Container Linux system use the same credentials, these may be configured locally by editing `/etc/iscsi/iscsid.conf` and setting the `node.session.auth.username` and `node.session.auth.password` fields.
+If the iSCSI target is configured to support mutual authentication (allowing the initiator to verify that it is speaking to the correct client), these should be set in `node.session.auth.username_in` and `node.session.auth.password_in`.
+
+### Start the iSCSI daemon
+
+```shell
+systemctl start iscsid
+```
+
+### Discover available iSCSI targets
+
+To discover targets, run:
+
+```shell
+iscsiadm -m discovery -t sendtargets -p target_ip:target_port
+```
+
+### Provide target-specific credentials
+
+For each unique `--targetname`, first enter the username:
+
+```shell
+iscsiadm -m node \
+ --targetname=custom_target \
+ --op update \
+ --name=node.session.auth.username \
+ --value=my_username
+```
+
+And then the password:
+
+```shell
+iscsiadm -m node \
+ --targetname=custom_target \
+ --op update \
+ --name=node.session.auth.password \
+ --value=my_secret_passphrase
+```
+
+### Log into an iSCSI target
+
+The following command will log into all discovered targets.
+
+```shell
+iscsiadm -m node --login
+```
+
+Then, to log into a specific target use:
+
+```shell
+iscsiadm -m node --targetname=custom_target --login
+```
+
+### Enable automatic iSCSI login at boot
+
+If you want to connect to iSCSI targets automatically at boot you first need to enable the systemd service:
+
+```shell
+systemctl enable iscsi
+```
+
+## Automatic iSCSI configuration
+
+To configure and start iSCSI automatically after a machine is provisioned, credentials need to be written to disk and the iSCSI service started.
+
+A Butane Config will be used to write the file `/etc/iscsi/iscsid.conf` to disk:
+
+```ini
+isns.address = host_ip
+isns.port = host_port
+node.session.auth.username = my_username
+node.session.auth.password = my_secret_password
+discovery.sendtargets.auth.username = my_username
+discovery.sendtargets.auth.password = my_secret_password
+```
+
+### The Butane Config
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: iscsi.service
+ enabled: true
+storage:
+ files:
+ - path: /etc/iscsi/iscsid.conf
+ mode: 0644
+ contents:
+ inline: |
+ isns.address = host_ip
+ isns.port = host_port
+ node.session.auth.username = my_username
+ node.session.auth.password = my_secret_password
+ discovery.sendtargets.auth.username = my_username
+ discovery.sendtargets.auth.password = my_secret_password
+```
+
+## Mounting iSCSI targets
+
+See the [mounting storage docs][mounting-storage] for an example.
+
+[iscsi-wiki]: https://en.wikipedia.org/wiki/ISCSI
+[mounting-storage]: mounting-storage
+[butane-configs]: ../../provisioning/config-transpiler
diff --git a/content/docs/latest/setup/storage/mounting-storage.md b/content/docs/latest/setup/storage/mounting-storage.md
new file mode 100644
index 00000000..50bb1599
--- /dev/null
+++ b/content/docs/latest/setup/storage/mounting-storage.md
@@ -0,0 +1,155 @@
+---
+title: Mounting storage
+description: How to format and attach additional storage devices.
+weight: 10
+aliases:
+ - ../../os/mounting-storage
+ - ../../clusters/scaling/mounting-storage
+---
+
+Butane Configs can be used to format and attach additional filesystems to Flatcar Container Linux nodes, whether such storage is provided by an underlying cloud platform, physical disk, SAN, or NAS system. This is done by specifying how partitions should be mounted in the config, and then using a _systemd mount unit_ to mount the partition. By [systemd convention][systemd-mount-man], mount unit names derive from the target mount point, with interior slashes replaced by dashes, and the `.mount` extension appended. A unit mounting onto `/var/www` is thus named `var-www.mount`.
+
+Mount units name the source filesystem and target mount point, and optionally the filesystem type. *Systemd* mounts filesystems defined in such units at boot time. The following example formats an [EC2 ephemeral disk][ec2-disk] and then mounts it at the node's `/media/ephemeral` directory. The mount unit is therefore named `media-ephemeral.mount`.
+
+Note that you should not use the direct path `/dev/sdX` for the `What=` path but **use a stable identifier** such as `/dev/disk/by-label/X` or `/dev/disk/by-partlabel/X`, because, e.g., `/dev/sda` can become `/dev/sdb` after a reboot: the Linux kernel assigns device names in the order the devices appear, which can be unstable. The best approach is to match the disk based on the content you expect, such as a filesystem or partition label that you set up by formatting the disk on first boot via Ignition. This way you can use `/dev/sdX` as the `device:` path in the `filesystems:` section, which is only evaluated on the first boot, and you don't have to care whether the disk gets a different name after a reboot, because your mount unit uses `/dev/disk/by-label/` to find the correct disk. If that is not possible you can try your luck with `/dev/disk/by-path/X` entries, which depend on the way the disk is attached to the machine but not on the discovery order of the Linux kernel.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ filesystems:
+ - device: /dev/xvdb
+ format: ext4
+ wipe_filesystem: true
+ label: ephemeral1
+systemd:
+ units:
+ - name: media-ephemeral.mount
+ enabled: true
+ contents: |
+ [Unit]
+ Before=local-fs.target
+ [Mount]
+ What=/dev/disk/by-label/ephemeral1
+ Where=/media/ephemeral
+ Type=ext4
+ [Install]
+ WantedBy=local-fs.target
+```
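+
+Alternatively, instead of writing the mount unit by hand, Butane can generate it from the filesystem entry itself. A sketch using the same hypothetical device and label, where `path` plus `with_mount_unit: true` replace the explicit unit:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+  filesystems:
+    - device: /dev/xvdb
+      format: ext4
+      wipe_filesystem: true
+      label: ephemeral1
+      # Mount point; a matching .mount unit is generated automatically
+      path: /media/ephemeral
+      with_mount_unit: true
+```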
+
+## Use attached storage for Docker
+
+Docker containers can be very large and debugging a build process makes it easy to accumulate hundreds of containers. It's advantageous to use attached storage to expand your capacity for container images. Be aware that some cloud providers treat certain disks as ephemeral and you will lose all Docker images contained on that disk.
+
+We're going to format a device as ext4 and then mount it to `/var/lib/docker`, where Docker stores images. Be sure to hardcode the correct device or look for a device by label:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ filesystems:
+ - device: /dev/xvdb
+ format: ext4
+ wipe_filesystem: true
+ label: ephemeral1
+systemd:
+ units:
+ - name: var-lib-docker.mount
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Mount ephemeral to /var/lib/docker
+ Before=local-fs.target
+ [Mount]
+ What=/dev/disk/by-label/ephemeral1
+ Where=/var/lib/docker
+ Type=ext4
+ [Install]
+ WantedBy=local-fs.target
+ - name: docker.service
+ dropins:
+ - name: 10-wait-docker.conf
+ contents: |
+ [Unit]
+ After=var-lib-docker.mount
+ Requires=var-lib-docker.mount
+```
+
+## Creating and mounting a btrfs volume file
+
+Flatcar Container Linux uses ext4 + overlayfs to provide a layered filesystem for the root partition. If you'd like to use btrfs for your Docker containers, you can do so with two systemd units: one that creates and formats a btrfs volume file and another that mounts it.
+
+In this example, we are going to mount a new 25GB btrfs volume file to `/var/lib/docker`. Once the Docker service has started, you can verify that Docker is using the btrfs storage driver by executing `sudo docker info`. We recommend allocating **no more than 85%** of the available disk space for a btrfs filesystem, as journald will also require space on the host filesystem.
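As a quick sanity check on that guideline (the 30 GiB disk size here is only an illustrative assumption):

```shell
# 85% sizing guideline: on an illustrative 30 GiB data disk, cap the
# btrfs volume file at about 25 GiB, matching the 25G used below.
disk_gib=30
max_gib=$(( disk_gib * 85 / 100 ))
echo "${max_gib}G"   # 25G
```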
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: format-var-lib-docker.service
+ contents: |
+ [Unit]
+ Before=docker.service var-lib-docker.mount
+ RequiresMountsFor=/var/lib
+ ConditionPathExists=!/var/lib/docker.btrfs
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/truncate --size=25G /var/lib/docker.btrfs
+ ExecStart=/usr/sbin/mkfs.btrfs /var/lib/docker.btrfs
+ - name: var-lib-docker.mount
+ enabled: true
+ contents: |
+ [Unit]
+ Before=docker.service
+ After=format-var-lib-docker.service
+ Requires=format-var-lib-docker.service
+ [Mount]
+ What=/var/lib/docker.btrfs
+ Where=/var/lib/docker
+ Type=btrfs
+ Options=loop,discard
+ [Install]
+ RequiredBy=docker.service
+```
+
+Note the declaration of `ConditionPathExists=!/var/lib/docker.btrfs`. Without this line, systemd would reformat the btrfs filesystem every time the machine starts.
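In shell terms, the condition is roughly the following guard (the temp path stands in for `/var/lib/docker.btrfs`):

```shell
# Rough shell equivalent of ConditionPathExists=!/var/lib/docker.btrfs:
# run the format step only when the volume file does not exist yet.
volume="$(mktemp -d)/docker.btrfs"     # illustrative path
format_if_missing() {
  if [ ! -e "$volume" ]; then
    touch "$volume"                    # stands in for truncate + mkfs.btrfs
    echo formatted
  else
    echo skipped
  fi
}
first=$(format_if_missing)
second=$(format_if_missing)
echo "$first $second"                  # formatted skipped
```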
+
+## Mounting NFS exports
+
+This Butane Config excerpt mounts an NFS export onto the Flatcar Container Linux node's `/var/www`.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: var-www.mount
+ enabled: true
+ contents: |
+ [Unit]
+ Before=remote-fs.target
+ [Mount]
+ What=nfs.example.com:/var/www
+ Where=/var/www
+ Type=nfs
+ [Install]
+ WantedBy=remote-fs.target
+```
+
+To declare that another service depends on this mount, name the mount unit in the dependent unit's `After` and `Requires` properties:
+
+```yaml
+[Unit]
+After=var-www.mount
+Requires=var-www.mount
+```
+
+If the mount fails, dependent units will not start.
+
+## Further reading
+
+Check the [`systemd mount` docs][systemd-mount-man] to learn about the available options. Examples specific to [EC2][ec2-disk] and [Google Compute Engine][gcp-disk] can be used as a starting point.
+
+[ec2-disk]: ../../installing/cloud/aws-ec2#instance-storage
+[gcp-disk]: ../../installing/cloud/gcp#additional-storage
+[systemd-mount-man]: http://www.freedesktop.org/software/systemd/man/systemd.mount.html
diff --git a/content/docs/latest/setup/storage/raid.md b/content/docs/latest/setup/storage/raid.md
new file mode 100644
index 00000000..3ad21d96
--- /dev/null
+++ b/content/docs/latest/setup/storage/raid.md
@@ -0,0 +1,70 @@
+---
+title: Configuring RAID on Flatcar Container Linux
+linktitle: Configuring RAID
+weight: 10
+aliases:
+ - ../../os/root-filesystem-placement
+ - ../../bare-metal/root-filesystem-placement
+---
+
+Flatcar Container Linux supports composite disk devices such as RAID arrays. If the root filesystem is placed on a composite device, special care must be taken to ensure Flatcar Container Linux can find and mount the filesystem early in the boot process. GPT partition entries have a [partition type GUID](https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs) that specifies what type of partition it is (e.g. Linux filesystem); Flatcar Container Linux uses special type GUIDs to indicate that a partition is a component of a composite device containing the root filesystem.
+
+## Root on RAID
+
+RAID enables multiple disks to be combined into a single logical disk to increase reliability and performance. To create a software RAID array when provisioning a Flatcar Container Linux system, use the `storage.raid` section of [Butane Config][butane-configs]. RAID components containing the root filesystem must have the type GUID `be9067b9-ea49-4f15-b4f6-f36f8c9e1818`. All other RAID arrays must not have that GUID; the Linux RAID partition GUID `a19d880f-05fc-4d3b-a006-743f0f84911e` is recommended instead. See the [Ignition documentation](https://coreos.github.io/ignition/examples/#create-a-raid-enabled-data-volume) for more information on setting up RAID for data volumes.
+
+### Overview
+
+To place the root filesystem on a RAID array:
+
+* Create the component partitions used in the RAID array with the type GUID `be9067b9-ea49-4f15-b4f6-f36f8c9e1818`.
+* Create a RAID array from the component partitions.
+* Create a filesystem labeled `ROOT` on the RAID array.
+* Remove the `ROOT` label from the original root filesystem.
+
+### Example Butane Config
+
+This Butane Config creates partitions on `/dev/vdb` and `/dev/vdc` that fill each disk, creates a RAID array named `root_array` from those partitions, and finally creates the root filesystem on the array. To prevent inadvertent booting from the [original root filesystem][partition-table], `/dev/vda9` is reformatted with a blank ext4 filesystem labeled `unused`.
+
+**Warning**: This will erase both `/dev/vdb` and `/dev/vdc`.
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ disks:
+ - device: /dev/vdb
+ wipe_table: true
+ partitions:
+ - label: root1
+ type_guid: be9067b9-ea49-4f15-b4f6-f36f8c9e1818
+ - device: /dev/vdc
+ wipe_table: true
+ partitions:
+ - label: root2
+ type_guid: be9067b9-ea49-4f15-b4f6-f36f8c9e1818
+ raid:
+ - name: "root_array"
+ level: "raid1"
+ devices:
+ - "/dev/vdb1"
+ - "/dev/vdc1"
+ filesystems:
+ - device: "/dev/md/root_array"
+ format: "ext4"
+ label: "ROOT"
+ - device: "/dev/vda9"
+ format: "ext4"
+ wipe_filesystem: true
+ label: "unused"
+```
+
+### Limitations
+
+* Other system partitions, such as `USR-A`, `USR-B`, `OEM`, and `EFI-SYSTEM`, cannot be placed on a software RAID array.
+* RAID components containing the root filesystem must be partitions on a GPT-partitioned device, not whole-disk devices or partitions on an MBR-partitioned disk.
+* `/etc/mdadm.conf` cannot be used to configure a RAID array containing the root filesystem.
+* Since Ignition cannot modify the type GUID of existing partitions, the default `ROOT` partition cannot be reused as a component of a RAID array. A future version of Ignition will support resizing the `ROOT` partition and changing its type GUID, allowing it to be used as part of a RAID array.
+
+[butane-configs]: ../../provisioning/config-transpiler
+[partition-table]: ../../reference/developer-guides/sdk-disk-partitions/#partition-table
diff --git a/content/docs/latest/setup/systemd/_index.md b/content/docs/latest/setup/systemd/_index.md
new file mode 100644
index 00000000..dd07e3b3
--- /dev/null
+++ b/content/docs/latest/setup/systemd/_index.md
@@ -0,0 +1,10 @@
+---
+title: Using Systemd on Flatcar Container Linux
+linktitle: Using Systemd
+description: >
+ Flatcar makes heavy use of systemd services and customization techniques.
+ Understanding core concepts like drop-in units, timers and embedding
+ environment variables, make it easier to apply the right changes to an
+ instance being deployed. These guides can help you get up to speed.
+weight: 5
+---
diff --git a/content/docs/latest/setup/systemd/drop-in-units.md b/content/docs/latest/setup/systemd/drop-in-units.md
new file mode 100644
index 00000000..8c78f815
--- /dev/null
+++ b/content/docs/latest/setup/systemd/drop-in-units.md
@@ -0,0 +1,166 @@
+---
+title: Using systemd drop-in units
+linktitle: Drop-In Units
+description: How to customize the running system by using drop-in units.
+weight: 20
+aliases:
+ - ../../os/using-systemd-drop-in-units
+ - ../../clusters/customization/using-systemd-drop-in-units
+---
+
+There are two methods of overriding default Flatcar Container Linux settings in unit files: copying the unit file from `/usr/lib64/systemd/system` to `/etc/systemd/system` and modifying the chosen settings, or creating a directory named `unit.d` within `/etc/systemd/system` and placing a drop-in file `name.conf` there that changes only the specific settings one is interested in. Note that multiple such drop-in files are read if present.
+
+The advantage of the first method is that one easily overrides the complete unit; the default Flatcar Container Linux unit is no longer parsed at all. The disadvantage is that improvements to the unit file shipped by Flatcar Container Linux are not automatically incorporated on updates.
+
+The advantage of the second method is that one overrides only the settings one specifically wants, and updates to the original Flatcar Container Linux unit automatically apply. The disadvantage is that some future Flatcar Container Linux update might be incompatible with the local changes, but the risk is much lower.
+
+Note that for drop-in files, if one wants to remove entries from a setting that is parsed as a list (and is not a dependency), such as `ConditionPathExists=` (or, e.g., `ExecStart=` in service units), one needs to first clear the list and then re-add all entries except the one to be removed. See below for an example.
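
A minimal sketch: a drop-in that replaces a unit's `ExecStart=` command must reset the list first; without the empty assignment, the new command would be appended and the unit would end up with two `ExecStart=` entries (the unit and command here are hypothetical):

```ini
# /etc/systemd/system/example.service.d/10-exec.conf (hypothetical unit)
[Service]
# An empty assignment clears the ExecStart list inherited from the unit file.
ExecStart=
# Re-add only the command you want to keep.
ExecStart=/usr/bin/example --flag
```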
+
+This also applies to user instances of systemd, but with different locations for the unit files. See the section on unit load paths in the [official systemd documentation](http://www.freedesktop.org/software/systemd/man/systemd.unit.html) for further details.
+
+## Example: customizing locksmithd.service
+
+Let's review the `/usr/lib64/systemd/system/locksmithd.service` unit (you can display it with `systemctl cat locksmithd.service`), which has the following contents:
+
+```ini
+[Unit]
+Description=Cluster reboot manager
+After=update-engine.service
+ConditionVirtualization=!container
+ConditionPathExists=!/usr/.noupdate
+
+[Service]
+CPUShares=16
+MemoryLimit=32M
+PrivateDevices=true
+Environment=GOMAXPROCS=1
+EnvironmentFile=-/usr/share/flatcar/update.conf
+EnvironmentFile=-/etc/flatcar/update.conf
+ExecStart=/usr/lib/locksmith/locksmithd
+Restart=on-failure
+RestartSec=10s
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Let's walk through increasing the `RestartSec` parameter via both methods:
+
+### Override only specific option
+
+You can create a drop-in file `/etc/systemd/system/locksmithd.service.d/10-restart_60s.conf` with the following contents:
+
+```ini
+[Service]
+RestartSec=60s
+```
+
+Then reload systemd, scanning for new or changed units:
+
+```shell
+systemctl daemon-reload
+```
+
+Then restart the modified service if necessary. In our example we changed only the `RestartSec` option, but if you want to change environment variables, `ExecStart`, or other run options, you have to restart the service:
+
+```shell
+systemctl restart locksmithd.service
+```
+
+Here is how that could be implemented within a Butane Config:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: locksmithd.service
+ enabled: true
+ dropins:
+ - name: 10-restart_60s.conf
+ contents: |
+ [Service]
+ RestartSec=60s
+```
+
+This change is small and targeted. It is the easiest way to tweak a unit's parameters.
+
+### Override the whole unit file
+
+Another way is to override the whole systemd unit. Copy the default unit file `/usr/lib64/systemd/system/locksmithd.service` to `/etc/systemd/system/locksmithd.service` and change the chosen settings:
+
+```ini
+[Unit]
+Description=Cluster reboot manager
+After=update-engine.service
+ConditionVirtualization=!container
+ConditionPathExists=!/usr/.noupdate
+
+[Service]
+CPUShares=16
+MemoryLimit=32M
+PrivateDevices=true
+Environment=GOMAXPROCS=1
+EnvironmentFile=-/usr/share/flatcar/update.conf
+EnvironmentFile=-/etc/flatcar/update.conf
+ExecStart=/usr/lib/locksmith/locksmithd
+Restart=on-failure
+RestartSec=60s
+
+[Install]
+WantedBy=multi-user.target
+```
+
+Butane Config example:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: locksmithd.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Cluster reboot manager
+ After=update-engine.service
+ ConditionVirtualization=!container
+ ConditionPathExists=!/usr/.noupdate
+
+ [Service]
+ CPUShares=16
+ MemoryLimit=32M
+ PrivateDevices=true
+ Environment=GOMAXPROCS=1
+ EnvironmentFile=-/usr/share/flatcar/update.conf
+ EnvironmentFile=-/etc/flatcar/update.conf
+ ExecStart=/usr/lib/locksmith/locksmithd
+ Restart=on-failure
+ RestartSec=60s
+
+ [Install]
+ WantedBy=multi-user.target
+```
+
+### List drop-ins
+
+To see all runtime drop-in changes for system units run the command below:
+
+```shell
+systemd-delta --type=extended
+```
+
+## Other systemd examples
+
+For more examples using systemd customization, check out these documents:
+
+ * [Customizing Docker](../../container-runtimes/customizing-docker#using-a-dockercfg-file-for-authentication)
+ * [Customizing the SSH Daemon](../security/customizing-sshd#changing-the-sshd-port)
+ * [Using Environment Variables in systemd Units](using-environment-variables-in-systemd-units)
+
+## More Information
+
+ * [`systemd.service` Docs](http://www.freedesktop.org/software/systemd/man/systemd.service.html)
+ * [`systemd.unit` Docs](http://www.freedesktop.org/software/systemd/man/systemd.unit.html)
+ * [`systemd.target` Docs](http://www.freedesktop.org/software/systemd/man/systemd.target.html)
diff --git a/content/docs/latest/setup/systemd/environment-variables.md b/content/docs/latest/setup/systemd/environment-variables.md
new file mode 100644
index 00000000..7d24dcf3
--- /dev/null
+++ b/content/docs/latest/setup/systemd/environment-variables.md
@@ -0,0 +1,141 @@
+---
+title: Using environment variables in systemd units
+linktitle: Environment Variables
+description: How to configure and use environment variables in systemd units.
+weight: 30
+aliases:
+ - ../../os/using-environment-variables-in-systemd-units
+ - ../../clusters/customization/using-environment-variables-in-systemd-units
+---
+
+## Environment directive
+
+systemd has an `Environment` directive which sets environment variables for executed processes. It takes a space-separated list of variable assignments. This option may be specified more than once, in which case all listed variables will be set. If the same variable is set twice, the later setting overrides the earlier one. If the empty string is assigned to this option, the list of environment variables is reset and all prior assignments have no effect. `Environment` directives are used in built-in Flatcar Container Linux systemd units, for example in etcd2 and flannel.
+
+With the example below, you can configure your etcd2 daemon to use encryption. Just create `/etc/systemd/system/etcd2.service.d/30-certificates.conf` [drop-in] for etcd2.service:
+
+```ini
+[Service]
+# Client Env Vars
+Environment=ETCD_CA_FILE=/path/to/CA.pem
+Environment=ETCD_CERT_FILE=/path/to/server.crt
+Environment=ETCD_KEY_FILE=/path/to/server.key
+# Peer Env Vars
+Environment=ETCD_PEER_CA_FILE=/path/to/CA.pem
+Environment=ETCD_PEER_CERT_FILE=/path/to/peers.crt
+Environment=ETCD_PEER_KEY_FILE=/path/to/peers.key
+```
+
+Then run `sudo systemctl daemon-reload` and `sudo systemctl restart etcd2.service` to apply new environments to etcd2 daemon. You can read more about etcd2 certificates [here][customizing-etcd].
+
+## EnvironmentFile directive
+
+The `EnvironmentFile` directive is similar to `Environment` but reads the environment variables from a text file. The text file should contain newline-separated variable assignments.
+
+For example, in Flatcar Container Linux, the `coreos-metadata.service` service creates `/run/metadata/coreos`. This environment file can be included by other services in order to inject dynamic configuration. Here's an example of the environment file when run on DigitalOcean (the IP addresses have been removed):
+
+```shell
+COREOS_DIGITALOCEAN_IPV4_ANCHOR_0=X.X.X.X
+COREOS_DIGITALOCEAN_IPV4_PRIVATE_0=X.X.X.X
+COREOS_DIGITALOCEAN_HOSTNAME=test.example.com
+COREOS_DIGITALOCEAN_IPV4_PUBLIC_0=X.X.X.X
+COREOS_DIGITALOCEAN_IPV6_PUBLIC_0=X:X:X:X:X:X:X:X
+```
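The format is simple enough that, for plain `KEY=VALUE` lines, sourcing the file in a shell yields the same variables systemd would set. systemd itself does not invoke a shell; this is only an illustration, using a made-up file:

```shell
# Illustration only: for simple KEY=VALUE lines, sourcing an
# EnvironmentFile in a shell produces the same variables systemd sets.
envfile="$(mktemp)"
cat > "$envfile" <<'EOF'
COREOS_DIGITALOCEAN_HOSTNAME=test.example.com
EOF
set -a            # export everything assigned while sourcing
. "$envfile"
set +a
echo "$COREOS_DIGITALOCEAN_HOSTNAME"   # test.example.com
```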
+
+This environment file can then be sourced and its variables used. Here is an example drop-in for `etcd-member.service` which starts `coreos-metadata.service` and then uses the generated results:
+
+```ini
+[Unit]
+Requires=coreos-metadata.service
+After=coreos-metadata.service
+
+[Service]
+EnvironmentFile=/run/metadata/coreos
+ExecStart=
+ExecStart=/usr/bin/etcd2 \
+ --advertise-client-urls=http://${COREOS_DIGITALOCEAN_IPV4_PUBLIC_0}:2379 \
+ --initial-advertise-peer-urls=http://${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}:2380 \
+ --listen-client-urls=http://0.0.0.0:2379 \
+ --listen-peer-urls=http://${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}:2380 \
+ --initial-cluster=%m=http://${COREOS_DIGITALOCEAN_IPV4_PRIVATE_0}:2380
+```
+
+## Other examples
+
+### Use host IP addresses and EnvironmentFile
+
+You can also write your host IP addresses into the `/etc/network-environment` file using [this](https://github.com/kelseyhightower/setup-network-environment) utility. Then you can run your Docker containers in the following way:
+
+```ini
+[Unit]
+Description=Nginx service
+Requires=etcd2.service
+After=etcd2.service
+[Service]
+# Get network environmental variables
+EnvironmentFile=/etc/network-environment
+ExecStartPre=-/usr/bin/docker kill nginx
+ExecStartPre=-/usr/bin/docker rm nginx
+ExecStartPre=/usr/bin/docker pull nginx
+ExecStartPre=/usr/bin/etcdctl set /services/nginx '{"host": "%H", "ipv4_addr": ${DEFAULT_IPV4}, "port": 80}'
+ExecStart=/usr/bin/docker run --rm --name nginx -p ${DEFAULT_IPV4}:80:80 nginx
+ExecStop=/usr/bin/docker stop nginx
+ExecStopPost=/usr/bin/etcdctl rm /services/nginx
+```
+
+This unit file runs an nginx Docker container and binds it to a specific IP address and port.
+
+### System-wide environment variables
+
+You can define system-wide environment variables using a [Butane Config][butane-configs] as shown below:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/systemd/system.conf.d/10-default-env.conf
+ mode: 0644
+ contents:
+ inline: |
+ [Manager]
+ DefaultEnvironment=HTTP_PROXY=http://192.168.0.1:3128
+ - path: /etc/profile.env
+ mode: 0644
+ contents:
+ inline: |
+ export HTTP_PROXY=http://192.168.0.1:3128
+```
+
+Where:
+
+* The `/etc/systemd/system.conf.d/10-default-env.conf` config file sets default environment variables for all systemd units.
+* `/etc/profile.env` sets environment variables for all users logged in to Flatcar Container Linux.
+
+### etcd2.service unit advanced example
+
+See a [complete example][etcd-cluster-reconfiguration] of combining environment variables and systemd [drop-ins][drop-in] to reconfigure an existing machine running etcd.
+
+## More systemd examples
+
+For more systemd examples, check out these documents:
+
+ * [Customizing Docker][customizing-docker]
+ * [Customizing the SSH Daemon][customizing-sshd]
+ * [Using systemd Drop-In Units][drop-in]
+ * [etcd Cluster Runtime Reconfiguration on Flatcar Container Linux][etcd-cluster-reconfiguration]
+
+[drop-in]: drop-in-units
+[customizing-sshd]: ../security/customizing-sshd#changing-the-sshd-port
+[customizing-etcd]: ../customization/customize-etcd-unit
+[customizing-docker]: ../../container-runtimes/customizing-docker#using-a-dockercfg-file-for-authentication
+[butane-configs]: ../../provisioning/config-transpiler
+[etcd-cluster-reconfiguration]: https://github.com/coreos/docs/blob/master/etcd/etcd-live-cluster-reconfiguration.md
+
+## More Information
+
+ * [`systemd.exec` Docs](http://www.freedesktop.org/software/systemd/man/systemd.exec.html)
+ * [`systemd.service` Docs](http://www.freedesktop.org/software/systemd/man/systemd.service.html)
+ * [`systemd.unit` Docs](http://www.freedesktop.org/software/systemd/man/systemd.unit.html)
diff --git a/content/docs/latest/setup/systemd/getting-started.md b/content/docs/latest/setup/systemd/getting-started.md
new file mode 100644
index 00000000..10636bff
--- /dev/null
+++ b/content/docs/latest/setup/systemd/getting-started.md
@@ -0,0 +1,182 @@
+---
+title: Getting started with systemd
+linktitle: Getting Started
+description: An introduction to the most important systemd concepts used in Flatcar.
+weight: 10
+aliases:
+ - ../../os/getting-started-with-systemd
+ - ../../clusters/management/getting-started-with-systemd
+---
+
+systemd is an init system that provides many powerful features for starting, stopping, and managing processes. Within Flatcar Container Linux, you will almost exclusively use systemd to manage the lifecycle of your Docker containers.
+
+## Terminology
+
+systemd consists of two main concepts: a unit and a target. A unit is a configuration file that describes the properties of the process that you'd like to run. This is normally a `docker run` command or something similar. A target is a grouping mechanism that allows systemd to start up groups of processes at the same time. This happens at every boot as processes are started at different run levels.
+
+systemd is the first process started on Flatcar Container Linux, and it reads the different targets and starts the processes they specify, which brings the operating system up. The target you'll interact with most is `multi-user.target`, which holds all of the general-use unit files for our containers.
+
+Each target is actually a collection of symlinks to our unit files. This is specified in the unit file by `WantedBy=multi-user.target`. Running `systemctl enable foo.service` creates symlinks to the unit inside `multi-user.target.wants`.
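Under the hood, `systemctl enable` boils down to creating such a symlink; here is a sketch of the mechanism using stand-in paths (the temp directory stands in for `/etc/systemd/system`):

```shell
# What `systemctl enable hello.service` does, in essence: create a
# symlink in the target's .wants directory pointing at the unit file.
root="$(mktemp -d)"                    # stand-in for /etc/systemd/system
touch "$root/hello.service"
mkdir -p "$root/multi-user.target.wants"
ln -s "$root/hello.service" "$root/multi-user.target.wants/hello.service"
readlink "$root/multi-user.target.wants/hello.service"   # prints the unit path
```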
+
+## Unit file
+
+On Flatcar Container Linux, unit files are located at `/etc/systemd/system`. Let's create a simple unit named `hello.service`:
+
+```ini
+[Unit]
+Description=MyApp
+After=docker.service
+Requires=docker.service
+
+[Service]
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker rm --force busybox1
+ExecStart=/usr/bin/docker run --name busybox1 --pull always busybox /bin/sh -c "trap 'exit 0' INT TERM; while true; do echo Hello World; sleep 1; done"
+ExecStop=/usr/bin/docker stop busybox1
+Restart=always
+RestartSec=5s
+
+[Install]
+WantedBy=multi-user.target
+```
+
+The `Description` shows up in the systemd log and a few other places. Write something that will help you understand exactly what this does later on.
+
+`After=docker.service` and `Requires=docker.service` means this unit will only start after `docker.service` is active. You can define as many of these as you want.
+
+`ExecStartPre=` is the action to run before starting the main process; using the `-` prefix, you can ignore failures.
+`ExecStart=` allows you to specify any command that you'd like to run when this unit is started. The pid assigned to this process is what systemd will monitor to determine whether the process has crashed or not. Do not run docker containers with `-d` as this will prevent the container from starting as a child of this pid. systemd will think the process has exited and the unit will be stopped.
+`ExecStop=` is the action systemd will run when the unit should be stopped.
+
+`WantedBy=` is the target that this unit is a part of.
+
+To start a new unit, we need to tell systemd to create the symlink and then start the file:
+
+```shell
+sudo systemctl enable /etc/systemd/system/hello.service
+sudo systemctl start hello.service
+```
+
+To verify the unit started, you can see the list of containers running with `docker ps` and read the unit's output with `journalctl`:
+
+```shell
+$ journalctl -f -u hello.service
+-- Logs begin at Fri 2014-02-07 00:05:55 UTC. --
+Feb 11 17:46:26 localhost docker[23470]: Hello World
+Feb 11 17:46:27 localhost docker[23470]: Hello World
+Feb 11 17:46:28 localhost docker[23470]: Hello World
+...
+```
+
+- [Overview of systemctl](systemctl)
+- [Reading the System Log](../debug/reading-the-system-log)
+
+## Advanced unit files
+
+systemd provides a high degree of functionality in your unit files. Here's a curated list of useful features listed in the order they'll occur in the lifecycle of a unit:
+
+| Name | Description |
+|---------|-------------|
+| ExecStartPre | Commands that will run before `ExecStart`. |
+| ExecStart | Main commands to run for this unit. |
+| ExecStartPost | Commands that will run after all `ExecStart` commands have completed. |
+| ExecReload | Commands that will run when this unit is reloaded via `systemctl reload foo.service` |
+| ExecStop | Commands that will run when this unit is considered failed or if it is stopped via `systemctl stop foo.service` |
+| ExecStopPost | Commands that will run after `ExecStop` has completed. |
+| RestartSec | The amount of time to sleep before restarting a service. Useful to prevent your failed service from attempting to restart itself every 100ms. |
+
+The full list is located on the [systemd man page](http://www.freedesktop.org/software/systemd/man/systemd.service.html).
+
+Let's put a few of these concepts together to register new units within etcd. Imagine we had another container running that would read these values from etcd and act upon them.
+
+We can use `ExecStartPre` to scrub existing container state. The `docker kill` will force any previous copy of this container to stop, which is useful if we restarted the unit but Docker didn't stop the container for some reason. The `=-` prefix is systemd syntax to ignore errors for this command. We need this because Docker returns a non-zero exit code if we try to stop a container that doesn't exist. We don't consider this an error (because we want the container stopped), so we tell systemd to ignore the possible failure.
+
+`docker rm` will remove the container and `docker pull` will pull down the latest version. You can optionally pull a specific version as a Docker tag: `docker.io/nginx:1.25`.
+
+`ExecStart` is where the container is started from the container image that we pulled above.
+
+Since our container will be started in `ExecStart`, it makes sense for our etcd command to run as `ExecStartPost` to ensure that our container is started and functioning.
+
+When the service is told to stop, we need to stop the Docker container using its `--name` from the run command. We also need to clean up our etcd key when the container exits or the unit is failed by using `ExecStopPost`.
+
+```ini
+[Unit]
+Description=My Advanced Service
+After=etcd2.service
+After=docker.service
+
+[Service]
+TimeoutStartSec=0
+ExecStartPre=-/usr/bin/docker kill nginx
+ExecStartPre=-/usr/bin/docker rm nginx
+ExecStartPre=/usr/bin/docker pull docker.io/nginx
+ExecStart=/usr/bin/docker run --name nginx -p 8081:80 docker.io/nginx
+ExecStartPost=/usr/bin/etcdctl set /domains/example.com/10.10.10.123:8081 running
+ExecStop=/usr/bin/docker stop nginx
+ExecStopPost=/usr/bin/etcdctl rm /domains/example.com/10.10.10.123:8081
+
+[Install]
+WantedBy=multi-user.target
+```
+
+While it's possible to manage the starting, stopping, and removal of the container in a single `ExecStart` command by using `docker run --rm`, it's a good idea to separate the container's lifecycle into `ExecStartPre`, `ExecStart`, and `ExecStop` options as we've done above. This gives you a chance to inspect the container's state after it stops or fails.
+
+## Unit specifiers
+
+In our last example we had to hardcode our IP address when we announced our container in etcd. That's not scalable and systemd has a few variables built in to help us out. Here's a few of the most useful:
+
+| Variable | Meaning | Description |
+|----------|---------|-------------|
+| `%n` | Full unit name | Useful if the name of your unit is unique enough to be used as an argument on a command. |
+| `%m` | Machine ID | Useful for namespacing etcd keys by machine. Example: `/machines/%m/units` |
+| `%b` | Boot ID | Similar to the machine ID, but this value is random and changes on each boot. |
+| `%H` | Hostname | Allows you to run the same unit file across many machines. Useful for service discovery. Example: `/domains/example.com/%H:8081` |
+
+A full list of specifiers can be found on the [systemd man page](http://www.freedesktop.org/software/systemd/man/systemd.unit.html#Specifiers).
+
+## Instantiated units
+
+Since systemd is based on symlinks, there are a few interesting tricks you can leverage that are very powerful when used with containers. If you create multiple symlinks to the same unit file, the following variables become available to you:
+
+| Variable | Meaning | Description |
+|----------|---------|-------------|
+| `%p` | Prefix name | Refers to any string before `@` in your unit name. |
+| `%i` | Instance name | Refers to the string between the `@` and the suffix. |
+
+In our earlier example we had to hardcode our IP address when registering within etcd:
+
+```ini
+ExecStartPost=/usr/bin/etcdctl set /domains/example.com/10.10.10.123:8081 running
+```
+
+We can enhance this by using `%H` and `%i` to dynamically announce the hostname and port. Specify the port after the `@` by using two unit files named `foo@123.service` and `foo@456.service`:
+
+```ini
+ExecStartPost=/usr/bin/etcdctl set /domains/example.com/%H:%i running
+```
+
+This gives us the flexibility to use a single unit file to announce multiple copies of the same container on a single machine (no port overlap) and on multiple machines (no hostname overlap).
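The `%p`/`%i` split is purely textual; a sketch of how both values are derived from an instance name such as `foo@123.service`:

```shell
# How %p (prefix) and %i (instance) are derived from an
# instantiated unit name like foo@123.service.
unit="foo@123.service"
name="${unit%.service}"    # strip the unit suffix -> foo@123
prefix="${name%%@*}"       # %p -> foo
instance="${name#*@}"      # %i -> 123
echo "$prefix $instance"   # foo 123
```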
+
+## Shutdown hooks
+
+While systemd allows adding custom hooks in `/usr/lib/systemd/system-shutdown/` that are run on `poweroff`/`halt`/`reboot`/`kexec` events, this path is not writable on Flatcar Container Linux. Therefore, regular units must be used to run, e.g., a special cleanup action on shutdown:
+
+```ini
+[Unit]
+Description=Custom cleanup on shutdown
+DefaultDependencies=no
+After=final.target
+
+[Service]
+Type=oneshot
+ExecStart=bash -c 'echo bye; touch /bye'
+
+[Install]
+WantedBy=final.target
+```
+
+## More information
+
+- [`systemd.service` Docs](http://www.freedesktop.org/software/systemd/man/systemd.service.html)
+- [`systemd.unit` Docs](http://www.freedesktop.org/software/systemd/man/systemd.unit.html)
+- [`systemd.target` Docs](http://www.freedesktop.org/software/systemd/man/systemd.target.html)
diff --git a/content/docs/latest/setup/systemd/systemctl.md b/content/docs/latest/setup/systemd/systemctl.md
new file mode 100644
index 00000000..c96cf057
--- /dev/null
+++ b/content/docs/latest/setup/systemd/systemctl.md
@@ -0,0 +1,85 @@
+---
+title: Overview of systemctl
+linktitle: Using systemctl
+description: The most common operations done with systemctl in Flatcar.
+weight: 15
+aliases:
+ - ../../os/overview-of-systemctl
+ - ../../clusters/management/overview-of-systemctl
+---
+
+`systemctl` is your interface to systemd, the init system used in Flatcar Container Linux. All processes on a single machine are started and managed by systemd, including your Docker containers. You can learn more in our [Getting Started with systemd](getting-started) guide. Let's explore a few helpful `systemctl` commands. You must run all of these commands locally on the Flatcar Container Linux machine:
+
+## Find the status of a container
+
+The first step to troubleshooting with `systemctl` is to find the status of the item in question. If you have multiple `Exec` commands in your service file, you can see which one of them is failing and view the exit code. Here's a failing service that starts a private Docker registry in a container:
+
+```shell
+$ sudo systemctl status custom-registry.service
+
+custom-registry.service - Custom Registry Service
+ Loaded: loaded (/media/state/units/custom-registry.service; enabled-runtime)
+ Active: failed (Result: exit-code) since Sun 2013-12-22 12:40:11 UTC; 35s ago
+ Process: 10191 ExecStopPost=/usr/bin/etcdctl delete /registry (code=exited, status=0/SUCCESS)
+ Process: 10172 ExecStartPost=/usr/bin/etcdctl set /registry index.domain.com:5000 (code=exited, status=0/SUCCESS)
+ Process: 10171 ExecStart=/usr/bin/docker run -rm -p 5555:5000 54.202.26.87:5000/registry /bin/sh /root/boot.sh (code=exited, status=1/FAILURE)
+ Main PID: 10171 (code=exited, status=1/FAILURE)
+ CGroup: /system.slice/custom-registry.service
+
+Dec 22 12:40:01 localhost etcdctl[10172]: index.domain.com:5000
+Dec 22 12:40:01 localhost systemd[1]: Started Custom Registry Service.
+Dec 22 12:40:01 localhost docker[10171]: Unable to find image '54.202.26.87:5000/registry' (tag: latest) locally
+Dec 22 12:40:11 localhost docker[10171]: 2013/12/22 12:40:11 Invalid Registry endpoint: Get http://index2.domain.com:5000/v1/_ping: dial tcp 54.204.26.2...o timeout
+Dec 22 12:40:11 localhost systemd[1]: custom-registry.service: main process exited, code=exited, status=1/FAILURE
+Dec 22 12:40:11 localhost etcdctl[10191]: index.domain.com:5000
+Dec 22 12:40:11 localhost systemd[1]: Unit custom-registry.service entered failed state.
+Hint: Some lines were ellipsized, use -l to show in full.
+```
+
+You can see that `Process: 10171 ExecStart=/usr/bin/docker` exited with `status=1/FAILURE`, and the log states that the registry we attempted to launch the container from, `54.202.26.87`, wasn't a valid endpoint, so the container image couldn't be downloaded.
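+When scripting, you may not want to eyeball the full status output. Here's a rough sketch (the captured lines below are pasted from the status output above, not live data) that greps the failing `Exec` step out of saved output:
+
+```shell
+# Sketch: pull the failing Exec step out of captured `systemctl status` output.
+# The heredoc holds sample lines copied from the output above.
+status_output=$(cat <<'EOF'
+ Process: 10191 ExecStopPost=/usr/bin/etcdctl delete /registry (code=exited, status=0/SUCCESS)
+ Process: 10172 ExecStartPost=/usr/bin/etcdctl set /registry index.domain.com:5000 (code=exited, status=0/SUCCESS)
+ Process: 10171 ExecStart=/usr/bin/docker run -rm -p 5555:5000 54.202.26.87:5000/registry (code=exited, status=1/FAILURE)
+EOF
+)
+echo "$status_output" | grep 'FAILURE'
+```
+
+On a live machine, `systemctl show -p ExecMainStatus custom-registry.service` gives the main process exit status in a machine-readable form.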
+
+## List status of all units
+
+Listing all of the processes running on the box is too much information, but you can pipe the output into `grep` to find the services you're looking for. Here's how to list all service units and their status:
+
+```shell
+sudo systemctl list-units | grep .service
+```
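+Note that on a live machine `systemctl list-units --type=service --state=failed` filters for you directly. If all you have is captured output, a small text-processing sketch works too (the sample listing below is made up for illustration):
+
+```shell
+# Sketch: extract failed services from captured `systemctl list-units` output.
+# The sample listing below is illustrative, not real output.
+units=$(cat <<'EOF'
+custom-registry.service loaded failed failed  Custom Registry Service
+docker.service          loaded active running Docker Application Container Engine
+EOF
+)
+# Column 3 is the ACTIVE state; print unit names whose state is "failed".
+echo "$units" | awk '$3 == "failed" {print $1}'
+# → custom-registry.service
+```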
+
+## Start or stop a service
+
+```shell
+sudo systemctl start apache.service
+```
+
+```shell
+sudo systemctl stop apache.service
+```
+
+## Kill a service
+
+This sends a signal (`SIGTERM` by default) directly to the service's processes, skipping its configured stop commands:
+
+```shell
+sudo systemctl kill apache.service
+```
+
+## Restart a service
+
+Restarting a service is as easy as:
+
+```shell
+sudo systemctl restart apache.service
+```
+
+If you're restarting a service after you changed its service file, you will need to reload all of the service files before your changes take effect:
+
+```shell
+sudo systemctl daemon-reload
+```
+
+## More information
+
+- [Getting Started with systemd](getting-started)
+- [`systemd.service` Docs](http://www.freedesktop.org/software/systemd/man/systemd.service.html)
+- [`systemd.unit` Docs](http://www.freedesktop.org/software/systemd/man/systemd.unit.html)
diff --git a/content/docs/latest/setup/systemd/timers.md b/content/docs/latest/setup/systemd/timers.md
new file mode 100644
index 00000000..de0eae4c
--- /dev/null
+++ b/content/docs/latest/setup/systemd/timers.md
@@ -0,0 +1,80 @@
+---
+title: Scheduling tasks with systemd timers
+linktitle: Timers
+description: How to schedule recurring tasks with systemd timers.
+weight: 25
+aliases:
+ - ../../os/scheduling-tasks-with-systemd-timers
+ - ../../clusters/management/scheduling-tasks-with-systemd-timers
+---
+
+Flatcar Container Linux uses systemd timers (a `cron` replacement) to schedule tasks. Here we will show you how to schedule a periodic job.
+
+Let's create an alternative for this `crontab` job:
+
+```cron
+*/10 * * * * /usr/bin/date >> /tmp/date
+```
+
+Timers are paired with service units of the same name, so we first have to create `/etc/systemd/system/date.service`:
+
+```ini
+[Unit]
+Description=Prints date into /tmp/date file
+
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/sh -c '/usr/bin/date >> /tmp/date'
+```
+
+Then we have to create a timer unit with the same name but a `.timer` suffix, `/etc/systemd/system/date.timer`:
+
+```ini
+[Unit]
+Description=Run date.service every 10 minutes
+
+[Timer]
+OnCalendar=*:0/10
+```
+
+This config runs `date.service` every 10 minutes. Run `systemctl start date.timer` to activate the timer. You can list active timers with the `systemctl list-timers` command, or include inactive ones with `systemctl list-timers --all`.
+
+You can also give the timer a different name, e.g. `task.timer`. In that case you have to specify the service unit name explicitly in the `[Timer]` section:
+
+```ini
+Unit=date.service
+```
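+Putting that together, a differently-named timer could look like this (a sketch; the `task.timer` filename is just an example, not part of the original setup):
+
+```ini
+# /etc/systemd/system/task.timer (hypothetical name)
+[Unit]
+Description=Run date.service every 10 minutes
+
+[Timer]
+OnCalendar=*:0/10
+# Required when the timer's name doesn't match the service it activates:
+Unit=date.service
+```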
+
+## Butane Config
+
+Here you'll find an example Butane Config demonstrating how to install systemd timers:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: date.service
+ contents: |
+ [Unit]
+ Description=Prints date into /tmp/date file
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/sh -c '/usr/bin/date >> /tmp/date'
+ - name: date.timer
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Run date.service every 10 minutes
+
+ [Timer]
+ OnCalendar=*:0/10
+
+ [Install]
+ WantedBy=multi-user.target
+```
+
+## Further reading
+
+If you're interested in more general systemd timers feature, check out the [full documentation](http://www.freedesktop.org/software/systemd/man/systemd.timer.html).
diff --git a/content/docs/latest/setup/systemd/udev-rules.md b/content/docs/latest/setup/systemd/udev-rules.md
new file mode 100644
index 00000000..bbe0d387
--- /dev/null
+++ b/content/docs/latest/setup/systemd/udev-rules.md
@@ -0,0 +1,90 @@
+---
+title: Using systemd and udev rules
+description: How to run units when specific udev events trigger.
+weight: 35
+aliases:
+ - ../../os/using-systemd-and-udev-rules
+ - ../../clusters/management/using-systemd-and-udev-rules
+---
+
+In this example we will use a libvirt VM running Flatcar Container Linux and trigger a systemd unit on a disk attach event. First we have to create the systemd unit file `/etc/systemd/system/device-attach.service`:
+
+```ini
+[Service]
+Type=oneshot
+ExecStart=/usr/bin/echo 'device has been attached'
+```
+
+This unit file will be triggered by our udev rule.
+
+Then we have to start `udevadm monitor --environment` to monitor kernel events.
+
+Once you've attached a virtio libvirt device (e.g. `virsh attach-disk coreos /dev/VG/test vdc`) you'll see `udevadm` output similar to this:
+
+```text
+UDEV [545.954641] add /devices/pci0000:00/0000:00:18.0/virtio4/block/vdb (block)
+.ID_FS_TYPE_NEW=
+ACTION=add
+DEVNAME=/dev/vdb
+DEVPATH=/devices/pci0000:00/0000:00:18.0/virtio4/block/vdb
+DEVTYPE=disk
+ID_FS_TYPE=
+MAJOR=254
+MINOR=16
+SEQNUM=1327
+SUBSYSTEM=block
+USEC_INITIALIZED=545954447
+```
+
+As the output above shows, udev generates an event carrying the properties (`ACTION=add` and `SUBSYSTEM=block`) that we will match in our rule. Store the rule in a file such as `/etc/udev/rules.d/01-block.rules` (on a running system, apply it with `udevadm control --reload`). It should look like this:
+
+```text
+ACTION=="add", SUBSYSTEM=="block", TAG+="systemd", ENV{SYSTEMD_WANTS}="device-attach.service"
+```
+
+This rule tells udev to trigger the `device-attach.service` systemd unit whenever a block device is attached. Now when we run `virsh attach-disk coreos /dev/VG/test vdc` on the host machine, we should see the `device has been attached` message in the Flatcar Container Linux node's journal. The same approach applies to USB/SAS/SATA device attachment.
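+As a rough sanity check (a sketch using properties copied from the captured event above, not a live system), you can confirm the event carries both keys the rule matches on:
+
+```shell
+# Sketch: verify a captured udev event would match the rule's conditions.
+event=$(cat <<'EOF'
+ACTION=add
+DEVNAME=/dev/vdb
+DEVTYPE=disk
+SUBSYSTEM=block
+EOF
+)
+if echo "$event" | grep -qx 'ACTION=add' && echo "$event" | grep -qx 'SUBSYSTEM=block'; then
+  echo "rule would match"
+fi
+```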
+
+## Butane Config example
+
+To use the unit and udev rule with a Butane Config, modify this example as needed:
+
+```yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /etc/udev/rules.d/01-block.rules
+ mode: 0644
+ contents:
+ inline: |
+ ACTION=="add", SUBSYSTEM=="block", TAG+="systemd", ENV{SYSTEMD_WANTS}="device-attach.service"
+systemd:
+ units:
+ - name: device-attach.service
+ contents: |
+ [Unit]
+ Description=Notify about attached device
+
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/echo 'device has been attached'
+```
+
+## More systemd examples
+
+For more systemd examples, check out these documents:
+
+ * [Customizing Docker][customizing-docker]
+ * [Customizing the SSH Daemon][customizing-sshd]
+ * [Using systemd Drop-In Units][drop-in]
+
+[drop-in]: drop-in-units
+[customizing-sshd]: ../security/customizing-sshd#changing-the-sshd-port
+[customizing-docker]: ../../container-runtimes/customizing-docker#using-a-dockercfg-file-for-authentication
+
+## More information
+
+- [`systemd.service` Docs](http://www.freedesktop.org/software/systemd/man/systemd.service.html)
+- [`systemd.unit` Docs](http://www.freedesktop.org/software/systemd/man/systemd.unit.html)
+- [`systemd.target` Docs](http://www.freedesktop.org/software/systemd/man/systemd.target.html)
+- [udev Docs](http://www.freedesktop.org/software/systemd/man/udev.html)
diff --git a/content/docs/latest/tutorial/_index.md b/content/docs/latest/tutorial/_index.md
new file mode 100644
index 00000000..6450703b
--- /dev/null
+++ b/content/docs/latest/tutorial/_index.md
@@ -0,0 +1,30 @@
+---
+title: Flatcar tutorial
+linktitle: Tutorial
+weight: 2
+---
+
+# Introduction
+
+This tutorial is a deep dive into some fundamental Flatcar concepts. It is designed to give you the key elements and resources to become autonomous with Flatcar. If you want a quicker start, have a look at the [quickstart guide][quickstart].
+
+# Requirements
+
+* Linux VM with nested virtualization (or Linux host with KVM)
+* `qemu`
+* `terraform` (https://developer.hashicorp.com/terraform/downloads)
+* `butane` (can be used from the Docker image or directly from the binary: https://coreos.github.io/butane/getting-started/#getting-butane)
+* (OpenStack credentials for the "Hands-on 3")
+
+For each covered item, there is a demo and a few lines to explain what's going on under the hood - each item is independent, but it's recommended to follow them in the given order, especially if it is your first time operating Flatcar.
+
+* [Hands-on 1][hands-on-1]: Discovering
+* [Hands-on 2][hands-on-2]: Provisioning
+* [Hands-on 3][hands-on-3]: Deploying
+* [Hands-on 4][hands-on-4]: Updating
+
+[hands-on-1]: hands-on-1
+[hands-on-2]: hands-on-2
+[hands-on-3]: hands-on-3
+[hands-on-4]: hands-on-4
+[quickstart]: ../installing
diff --git a/content/docs/latest/tutorial/hands-on-1/_index.md b/content/docs/latest/tutorial/hands-on-1/_index.md
new file mode 100644
index 00000000..291a8c23
--- /dev/null
+++ b/content/docs/latest/tutorial/hands-on-1/_index.md
@@ -0,0 +1,56 @@
+---
+title: Hands on 1 - Discovering
+linktitle: Hands on 1 - Discovering
+weight: 2
+---
+
+The goal of this hands-on is to:
+* locally run a Flatcar instance
+* boot the instance and SSH into it
+* run an Nginx container on the instance
+
+# Step-by-step
+
+```bash
+# create a working directory
+mkdir flatcar; cd flatcar
+# get the qemu helper
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu.sh
+# get the latest stable release for qemu
+wget https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_qemu_image.img.bz2
+# extract the downloaded image
+bzip2 --decompress --keep flatcar_production_qemu_image.img.bz2
+# make the qemu helper executable
+chmod +x flatcar_production_qemu.sh
+# start the Flatcar image in console mode
+./flatcar_production_qemu.sh -- -display curses
+```
+
+NOTE: it's possible to connect to the instance via SSH:
+```bash
+$ cat ~/.ssh/config
+Host flatcar
+ User core
+ StrictHostKeyChecking no
+ UserKnownHostsFile /dev/null
+ HostName 127.0.0.1
+ Port 2222
+$ ssh flatcar
+```
+
+Once on the instance, you can try things out and run a Docker image:
+```bash
+# run an nginx docker image
+docker run --rm -p 80:80 -d nginx
+# assert it works
+curl localhost
+```
+
+# Resources
+
+* [documentation](../../installing/vms/qemu/#startup-flatcar-container-linux)
+
+# Demo
+
+* Video with timestamp: https://youtu.be/woZlGiLsKp0?t=472
+* Asciinema: https://asciinema.org/a/591438
diff --git a/content/docs/latest/tutorial/hands-on-2/_index.md b/content/docs/latest/tutorial/hands-on-2/_index.md
new file mode 100644
index 00000000..e4db29f3
--- /dev/null
+++ b/content/docs/latest/tutorial/hands-on-2/_index.md
@@ -0,0 +1,52 @@
+---
+title: Hands on 2 - Provisioning
+linktitle: Hands on 2 - Provisioning
+weight: 2
+---
+
+The goal of this hands-on is to:
+* provision a local Flatcar instance
+* write Butane configuration
+* generate the Ignition configuration
+* boot the instance with the config
+
+This is what we did in the previous hands-on, but now it's done _as code_: we want to deploy an Nginx container serving a "hello world" static webpage. As a reminder, an Ignition configuration is used to provision a Flatcar instance; it's a JSON file generated from a Butane configuration (YAML).
+
+# Step-by-step
+
+* Clone the tutorial repository and cd into it: `git clone https://github.com/tormath1/flatcar-tutorial ; cd flatcar-tutorial/hands-on-2`
+* Open `./config.yaml` and find the TODO section.
+* Add the following section (from https://coreos.github.io/butane/examples/#files):
+```yaml
+storage:
+ files:
+ - path: /var/www/index.html
+ contents:
+ inline: Hello world
+```
+* Transpile the Butane configuration (`config.yaml`) to an Ignition configuration (`config.json`) - you can use the Butane [binary](https://coreos.github.io/butane/getting-started/#standalone-binary) or the Docker image:
+```bash
+$ docker run --rm -i quay.io/coreos/butane:latest < config.yaml > config.json
+```
+* Download a Flatcar image (or reuse the compressed one from the previous hands-on). NOTE: Ignition runs at first boot, so it won't work if you reuse a previously booted image; decompress a fresh copy each time you change your Ignition config.
+```bash
+cp ../hands-on-1/flatcar_production_qemu_image.img.bz2 .
+bzip2 --decompress --keep ./flatcar_production_qemu_image.img.bz2
+chmod +x flatcar_production_qemu.sh
+```
+* Start the image with Ignition configuration (`-i ./config.json`)
+```bash
+./flatcar_production_qemu.sh -i ./config.json -- -display curses
+```
+* Once on the instance, verify that Nginx works correctly (`curl localhost` or `systemctl status nginx.service`)
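+Back on the host, you can sanity-check the transpile step by confirming the output file is valid JSON. Here's a sketch; the inline document below is a hypothetical minimal Ignition config standing in for the tutorial's real output:
+
+```shell
+# Sketch: validate that a transpiled Ignition config parses as JSON.
+# (Hypothetical minimal config written to /tmp for illustration.)
+cat > /tmp/config.json <<'EOF'
+{"ignition": {"version": "3.3.0"}}
+EOF
+python3 -m json.tool < /tmp/config.json > /dev/null && echo "valid JSON"
+```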
+
+# Resources
+
+* https://coreos.github.io/butane/examples/
+* https://coreos.github.io/ignition/rationale/
+* https://www.flatcar.org/docs/latest/installing/#concepts-configuration-and-provisioning
+
+# Demo
+
+* Video with timestamp: https://youtu.be/woZlGiLsKp0?t=676
+* Asciinema: https://asciinema.org/a/591440
diff --git a/content/docs/latest/tutorial/hands-on-3/_index.md b/content/docs/latest/tutorial/hands-on-3/_index.md
new file mode 100644
index 00000000..54fab605
--- /dev/null
+++ b/content/docs/latest/tutorial/hands-on-3/_index.md
@@ -0,0 +1,57 @@
+---
+title: Hands on 3 - Deploying
+linktitle: Hands on 3 - Deploying
+weight: 2
+---
+
+The goal of this hands-on is to:
+* deploy Flatcar instances with IaC (Terraform)
+* manipulate Terraform code
+* write Flatcar provisioning with Terraform
+* deploy Flatcar on OpenStack with Terraform
+
+This bundles hands-on 1 and hands-on 2, but this time it's not a local deployment and _everything_ is as code.
+
+# Step-by-step
+
+```bash
+git clone https://github.com/tormath1/flatcar-tutorial; cd flatcar-tutorial/hands-on-3
+# go into the terraform directory
+cd terraform
+# update the config for creating index.html from previous hands-on
+vim server-configs/server1.yaml
+# init the terraform project locally
+terraform init
+# get the credentials and update the `terraform.tfvars` consequently
+# generate the plan and inspect it
+terraform plan
+# apply the plan
+terraform apply
+# go on the horizon dashboard and connect with terraform credentials
+# find your instance
+```
+
+You can verify that it works by accessing the console (click on the instance, then "Console").
+
+_NOTE_: it's possible to SSH into the instance, but at the moment it requires an SSH jump through the OpenStack (devstack) host:
+```bash
+ssh -J user@[DEVSTACK-IP] -i ./.ssh/provisioning_private_key.pem -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@[SERVER-IP]
+```
+
+To destroy the instance:
+```bash
+# if you are happy, destroy everything
+terraform destroy
+```
+
+# Resources
+
+* https://github.com/flatcar/flatcar-terraform/ (NOTE: the terraform code used here is based on this repository)
+* https://www.flatcar.org/docs/latest/installing/cloud/openstack/
+
+# Demo
+
+* Video with timestamp: https://youtu.be/woZlGiLsKp0?t=1395
+* Asciinema: https://asciinema.org/a/591442
+
+
diff --git a/content/docs/latest/tutorial/hands-on-4/_index.md b/content/docs/latest/tutorial/hands-on-4/_index.md
new file mode 100644
index 00000000..54247970
--- /dev/null
+++ b/content/docs/latest/tutorial/hands-on-4/_index.md
@@ -0,0 +1,47 @@
+---
+title: Hands on 4 - Updating
+linktitle: Hands on 4 - Updating
+weight: 2
+---
+
+The goal of this hands-on is to:
+* leverage auto-update feature
+* boot an old version of Flatcar (stable-3374.2.5 for example)
+* provision it with the Ignition config from hands-on 2
+* control the update
+
+Hint: two services are used:
+* `update-engine.service`: to download the update from a release server (Nebraska)
+* `locksmithd.service`: to handle the reboot strategy
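+The reboot strategy is configured through `/etc/flatcar/update.conf`; a minimal sketch (the value shown is an example, not the tutorial's setting):
+
+```ini
+# /etc/flatcar/update.conf (example only)
+# Strategies include: reboot, etcd-lock, off - see the update strategies doc below.
+REBOOT_STRATEGY=reboot
+```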
+
+# Step-by-step
+
+```bash
+# download a previous version of Flatcar and the qemu helper
+$ wget https://stable.release.flatcar-linux.net/amd64-usr/3374.2.5/flatcar_production_qemu_image.img.bz2
+$ wget https://stable.release.flatcar-linux.net/amd64-usr/3374.2.5/flatcar_production_qemu.sh
+$ chmod +x flatcar_production_qemu.sh
+$ bzip2 --decompress ./flatcar_production_qemu_image.img.bz2
+# boot the instance with the nginx Ignition from a previous lab
+$ ./flatcar_production_qemu.sh -i ../hands-on-2/config.json -- -display curses
+# assert that `update-engine.service` and `locksmithd.service` are up and running
+$ systemctl status update-engine.service locksmithd.service
+# check the release number
+$ cat /etc/os-release
+# to accelerate the update we can force it. NOTE: this is not required in "real life"; it just avoids waiting minutes for the update to be downloaded!
+$ update_engine_client -update
+# once rebooted
+# check the release number
+$ cat /etc/os-release
+# assert that nginx is still running
+$ curl localhost
+```
+
+# Resources
+
+* https://www.flatcar.org/docs/latest/setup/releases/update-strategies/
+
+# Demo
+
+* Asciinema: https://asciinema.org/a/591443
+* Video with timestamp: https://youtu.be/woZlGiLsKp0?t=1762