diff --git a/README.md b/README.md
index 59d48c5a..bf3e24d8 100644
--- a/README.md
+++ b/README.md
@@ -31,15 +31,14 @@ ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' --ask-become-p
    ```bash
    export CK8S_CONFIG_PATH=~/.ck8s/my-environment
-   ./bin/ck8s-kubespray init <prefix> <flavor> [<SOPS fingerprint>]
+   ./bin/ck8s-kubespray init <wc|sc> <flavor> [<SOPS fingerprint>]
    ```

    Arguments:

-   * `prefix` will be used to differentiate this cluster from others in the same CK8S_CONFIG_PATH.
-     For now you need to set this to `wc` or `sc` if you want to install compliantkubernetes apps on top afterwards, this restriction will be removed later.
-   * `flavor` will determine some default values for a variety of config options.
+   - The init command accepts `wc` (*workload cluster*) or `sc` (*service cluster*) as the first argument, so as to create separate folders for each cluster's configuration files.
+   - `flavor` will determine some default values for a variety of config options.
      Supported options are `default`, `gcp`, `aws`, `vsphere`, and `openstack`.
-   * `SOPS fingerprint` is the gpg fingerprint that will be used for SOPS encryption.
+   - `SOPS fingerprint` is the gpg fingerprint that will be used for SOPS encryption.
      You need to set this or the environment variable `CK8S_PGP_FP` the first time SOPS is used in your specified config path.

 1. Edit the `inventory.ini` (found in your config path) to match the VMs (IP addresses and other settings that might be needed for your setup) that should be part of the cluster.
@@ -55,24 +54,24 @@ ansible-playbook -e 'ansible_python_interpreter=/usr/bin/python3' --ask-become-p
 1. Run kubespray to set up the kubernetes cluster:

    ```bash
-   ./bin/ck8s-kubespray apply <prefix> [<options>]
+   ./bin/ck8s-kubespray apply <wc|sc> [<options>]
    ```

    Any `options` added will be forwarded to ansible.

 1. Done.
    You should now have a working kubernetes cluster.
-   You should also have an encrypted kubeconfig at `<CK8S_CONFIG_PATH>/.state/kube_config_<prefix>.yaml` that you can use to access the cluster.
+   You should also have an encrypted kubeconfig at `<CK8S_CONFIG_PATH>/.state/kube_config_<wc|sc>.yaml` that you can use to access the cluster.

 ## Changing authorized SSH keys for a cluster

 Authorized SSH keys can be changed for a cluster using:

 ```bash
-./bin/ck8s-kubespray apply-ssh <prefix> [<options>]
+./bin/ck8s-kubespray apply-ssh <wc|sc> [<options>]
 ```

-It will set the public SSH key(s) found in`<CK8S_CONFIG_PATH>/<prefix>-config/group_vars/all/ck8s-ssh-keys.yaml` as authorized keys in your cluster (just add the keys you want to be authorized as elements in `ck8s_ssh_pub_keys_list`).
+It will set the public SSH key(s) found in `<CK8S_CONFIG_PATH>/<wc|sc>-config/group_vars/all/ck8s-ssh-keys.yaml` as authorized keys in your cluster (just add the keys you want to be authorized as elements in `ck8s_ssh_pub_keys_list`).
 Note that the authorized SSH keys for the cluster will be set to these keys _exclusively_, removing any keys that may already be authorized, so make sure the list includes **every SSH key** that should be authorized.

 When running this command, the SSH keys are applied to each node in the cluster sequentially, in reverse inventory order (first the workers and then the masters).
@@ -84,7 +83,7 @@ If the connection test fails, you may have lost your SSH access to the node; to
 You can reboot all nodes that wants to restart (usually to finish installing new packages) by running:

 ```bash
-./bin/ck8s-kubespray reboot-nodes <prefix> [--extra-vars manual_prompt=true] [<options>]
+./bin/ck8s-kubespray reboot-nodes <wc|sc> [--extra-vars manual_prompt=true] [<options>]
 ```

 If you set `--extra-vars manual_prompt=true` then you get a manual prompt before each reboot so you can stop the playbook if you want.
@@ -96,7 +95,7 @@ Note that this playbook requires you to use ansible version >= 2.10.
 You can remove a node from a ck8s cluster by running:

 ```bash
-./bin/ck8s-kubespray remove-node <prefix> <node1>[,<node2>,...] [<options>]
+./bin/ck8s-kubespray remove-node <wc|sc> <node1>[,<node2>,...] [<options>]
 ```

 ### Known issues
@@ -109,7 +108,7 @@ You can remove a node from a ck8s cluster by running:
 With the following command you can run any ansible playbook available in kubespray:

 ```bash
-./bin/ck8s-kubespray run-playbook <prefix> <playbook> [<options>]
+./bin/ck8s-kubespray run-playbook <wc|sc> <playbook> [<options>]
 ```

 Where `playbook` is the filename of the playbook that you want to run, e.g. `cluster.yml` if you want to create a cluster (making the command functionally the same as our `ck8s-kubespray apply` command) or `scale.yml` if you want to just add more nodes. Remember to check the kubespray documentation before running a playbook.
@@ -117,7 +116,7 @@ This will use the inventory, group-vars, and ssh key in your config path and the
 ## Kubeconfig

-We recommend that you use OIDC kubeconfigs instead of regular cluster-admin kubeconfigs. The default settings will create OIDC kubeconfigs for you when you run `./bin/ck8s-kubespray apply <prefix>`, but there are some variables you need to set. See the variables in: `<prefix>-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` in your config path.
+We recommend that you use OIDC kubeconfigs instead of regular cluster-admin kubeconfigs. The default settings will create OIDC kubeconfigs for you when you run `./bin/ck8s-kubespray apply <wc|sc>`, but there are some variables you need to set. See the variables in: `<wc|sc>-config/group_vars/k8s_cluster/ck8s-k8s-cluster.yaml` in your config path.

 But if you need to use a regular cluster-admin kubeconfig in a break-glass situation, then you can ssh to one of the controleplane nodes and use the kubeconfig at `/etc/kubernetes/admin.conf`. We recommend that you do not copy that kubeconfig to your local host, when dealing with production clusters.
@@ -126,6 +125,6 @@ For development you can skip OIDC and instead get a regular cluster admin kubeco
 The kubeconfig and OIDC cluster admin RBAC are managed with the playbooks `playbooks/kubeconfig.yml` and `playbooks/cluster_admin_rbac.yml`.
 You can run them manually with:

 ```bash
-./bin/ck8s-kubespray run-playbook <prefix> ../../playbooks/kubeconfig.yml -b
-./bin/ck8s-kubespray run-playbook <prefix> ../../playbooks/cluster_admin_rbac.yml -b
+./bin/ck8s-kubespray run-playbook <wc|sc> ../playbooks/kubeconfig.yml -b
+./bin/ck8s-kubespray run-playbook <wc|sc> ../playbooks/cluster_admin_rbac.yml -b
 ```
diff --git a/bin/apply-ssh.bash b/bin/apply-ssh.bash
index be408daa..7578c0f4 100755
--- a/bin/apply-ssh.bash
+++ b/bin/apply-ssh.bash
@@ -1,7 +1,7 @@
 #!/bin/bash

 # This script will run an ansible playbook.
-# It's not to be executed on its own but rather via `ck8s-kubespray apply-ssh <prefix>`.
+# It's not to be executed on its own but rather via `ck8s-kubespray apply-ssh <wc|sc>`.

 set -eu -o pipefail

diff --git a/bin/apply.bash b/bin/apply.bash
index a98b37e3..49268da8 100755
--- a/bin/apply.bash
+++ b/bin/apply.bash
@@ -2,7 +2,7 @@

 # This script will create a kubernetes cluster using kubespray.
 # It will also install some python dependencies for kubespray in a virtual environment
-# It's not to be executed on its own but rather via `ck8s-kubespray apply <prefix>`.
+# It's not to be executed on its own but rather via `ck8s-kubespray apply <wc|sc>`.
 set -eu -o pipefail
 shopt -s globstar nullglob dotglob

diff --git a/bin/ck8s-kubespray b/bin/ck8s-kubespray
index eb7a12b5..734e887a 100755
--- a/bin/ck8s-kubespray
+++ b/bin/ck8s-kubespray
@@ -10,25 +10,25 @@ here="$(dirname "$(readlink -f "$0")")"
 usage() {
     echo "COMMANDS:" 1>&2
     echo "  init           initialize the config path" 1>&2
-    echo "                 args: <prefix> <flavor> [<SOPS fingerprint>]" 1>&2
+    echo "                 args: <wc|sc> <flavor> [<SOPS fingerprint>]" 1>&2
     echo "  apply          runs kubespray to create the cluster" 1>&2
-    echo "                 args: <prefix> [<options>]" 1>&2
+    echo "                 args: <wc|sc> [<options>]" 1>&2
     echo "  remove-node    removes specified node from cluster" 1>&2
-    echo "                 args: <prefix> <node> [<options>]" 1>&2
+    echo "                 args: <wc|sc> <node> [<options>]" 1>&2
     echo "  run-playbook   runs any ansible playbook in kubespray" 1>&2
-    echo "                 args: <prefix> <playbook> [<options>]" 1>&2
+    echo "                 args: <wc|sc> <playbook> [<options>]" 1>&2
     echo "  apply-ssh      applies SSH keys from a file to a cluster" 1>&2
-    echo "                 args: <prefix> [<options>]" 1>&2
+    echo "                 args: <wc|sc> [<options>]" 1>&2
     echo "  reboot-nodes   reboots all nodes in a cluster if needed" 1>&2
-    echo "                 args: <prefix> [--extra-vars manual_prompt=true] [<options>]" 1>&2
+    echo "                 args: <wc|sc> [--extra-vars manual_prompt=true] [<options>]" 1>&2
     echo "  prune-nerdctl  removes unused container resources on all nodes" 1>&2
-    echo "                 args: <prefix> [<options>]" 1>&2
+    echo "                 args: <wc|sc> [<options>]" 1>&2
     echo "  upgrade        prepares config for upgrade" 1>&2
     echo "                 args: prepare" 1>&2
     exit 1
 }

-if [ $# -lt 2 ]; then
+if [ $# -lt 2 ] || [[ ! "${1}" = upgrade ]] && [[ ! "${2}" =~ ^(wc|sc)$ ]] || [[ ! "${2}" =~ ^(wc|sc|both)$ ]]; then
     usage
 else
     export prefix="${2}"
diff --git a/bin/common.bash b/bin/common.bash
index 6024017a..8fa24f92 100644
--- a/bin/common.bash
+++ b/bin/common.bash
@@ -7,6 +7,11 @@
 : "${CK8S_CONFIG_PATH:?Missing CK8S_CONFIG_PATH}"
 : "${prefix:?Missing prefix}"

+if [[ ! "${prefix}" =~ ^(wc|sc|both)$ ]]; then
+    echo "ERROR: invalid value set for \"prefix\", valid values are <wc|sc|both>" 1>&2
+    exit 1
+fi
+
 # Check for this mistake https://github.com/koalaman/shellcheck/wiki/SC2088
 # shellcheck disable=SC2088
 if [[ "${CK8S_CONFIG_PATH:0:2}" == "~/" ]]; then
diff --git a/bin/init.bash b/bin/init.bash
index aa804864..0c588119 100755
--- a/bin/init.bash
+++ b/bin/init.bash
@@ -3,7 +3,7 @@
 # This script takes care of initializing a CK8S configuration path for kubespray.
 # It writes the default configuration files to the config path and generates
 # some defaults where applicable.
-# It's not to be executed on its own but rather via `ck8s-kubespray init ...`.
+# It's not to be executed on its own but rather via `ck8s-kubespray init <wc|sc> ...`.

 set -eu -o pipefail
 shopt -s globstar nullglob dotglob
diff --git a/bin/prune-nerdctl.bash b/bin/prune-nerdctl.bash
index cd841b76..26909cd4 100755
--- a/bin/prune-nerdctl.bash
+++ b/bin/prune-nerdctl.bash
@@ -1,7 +1,7 @@
 #!/bin/bash

 # This script will run an ansible playbook.
-# It's not to be executed on its own but rather via `ck8s-kubespray prune-nerdctl <prefix>`.
+# It's not to be executed on its own but rather via `ck8s-kubespray prune-nerdctl <wc|sc>`.

 set -eu -o pipefail

diff --git a/bin/reboot-nodes.bash b/bin/reboot-nodes.bash
index 43e1a09f..bfbdede1 100755
--- a/bin/reboot-nodes.bash
+++ b/bin/reboot-nodes.bash
@@ -1,7 +1,7 @@
 #!/bin/bash

 # This script will run an ansible playbook.
-# It's not to be executed on its own but rather via `ck8s-kubespray reboot-nodes <prefix>`.
+# It's not to be executed on its own but rather via `ck8s-kubespray reboot-nodes <wc|sc>`.
 # Add the variable "manual_prompt = true" for a manual prompt.
 # The script may fail and in such situations rerun the script.

diff --git a/bin/remove-node.bash b/bin/remove-node.bash
index 7e073511..9df4d675 100755
--- a/bin/remove-node.bash
+++ b/bin/remove-node.bash
@@ -2,7 +2,7 @@

 # This script will run the remove-node.yml playbook.
 # It will also install some python dependencies for kubespray in a virtual environment
-# It's not to be executed on its own but rather via `ck8s-kubespray remove-node <prefix> <node>` [<options>].
+# It's not to be executed on its own but rather via `ck8s-kubespray remove-node <wc|sc> <node>` [<options>].

 set -eu -o pipefail
 shopt -s globstar nullglob dotglob
diff --git a/bin/run-playbook.bash b/bin/run-playbook.bash
index 4a8b5af2..1aee9cf0 100755
--- a/bin/run-playbook.bash
+++ b/bin/run-playbook.bash
@@ -2,7 +2,7 @@

 # This script will run an ansible playbook available in kubespray.
 # It will also install some python dependencies for kubespray in a virtual environment
-# It's not to be executed on its own but rather via `ck8s-kubespray run-playbook <prefix> <playbook>`.
+# It's not to be executed on its own but rather via `ck8s-kubespray run-playbook <wc|sc> <playbook>`.

 set -eu -o pipefail
 shopt -s globstar nullglob dotglob
diff --git a/docs/citycloud-snippets.md b/docs/citycloud-snippets.md
index 119810e2..7abba217 100644
--- a/docs/citycloud-snippets.md
+++ b/docs/citycloud-snippets.md
@@ -5,8 +5,6 @@ To set up a cluster on citycloud these snippets can be used
 Start by setting up some environments for this setup

 ```bash
-SERVICE_CLUSTER="my-new-sc-cluster"
-WORKLOAD_CLUSTERS=( "my-new-wc-cluster" )
 PUB_SSH_KEY_FILE="${HOME}/.ssh/id_rsa.pub"
 ```

@@ -41,7 +39,7 @@ Set up the clusters into respective folders

 ```bash
 pushd kubespray
-for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
+for CLUSTER in sc wc; do
     mkdir -p "${CK8S_CONFIG_PATH}/${CLUSTER}-config/group_vars"
     ln -s "${CK8S_CONFIG_PATH}/${CLUSTER}-config/" "inventory/$CLUSTER"
     # shellcheck disable=SC2016
@@ -73,7 +71,7 @@ k8s_allowed_remote_ips = ["1.2.3.4/32"]
 Now you're ready to deploy the infrastructure

 ```bash
-for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
+for CLUSTER in sc wc; do
     pushd "kubespray/inventory/${CLUSTER}"
     terraform init ../../contrib/terraform/openstack
     terraform apply \
@@ -94,7 +92,7 @@ To set up kubernetes with compliantkubernetes-kubespray you can follow these ste
 Initialize the configuration with.

 ```bash
-for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
+for CLUSTER in sc wc; do
     ./bin/ck8s-kubespray init "${CLUSTER}" openstack ~/.ssh/id_rsa
     ln -s "$(pwd)/kubespray/inventory/${CLUSTER}/tfstate-${CLUSTER}.tfstate" "${CK8S_CONFIG_PATH}/${CLUSTER}-config/" || true
     cp "kubespray/contrib/terraform/openstack/hosts" "${CK8S_CONFIG_PATH}/${CLUSTER}-config/inventory.ini"
@@ -107,7 +105,7 @@ Check the variables in the `group_vars` folder for each cluster and make sure th
 Apply the configuration and set up kubernetes.

 ```bash
-for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
+for CLUSTER in sc wc; do
     bin/ck8s-kubespray apply "${CLUSTER}"
 done
 ```
@@ -125,7 +123,7 @@ Later when you want to destroy the infrastructure. Then run the following:

 ```bash
-for CLUSTER in "${SERVICE_CLUSTER}" "${WORKLOAD_CLUSTERS[@]}"; do
+for CLUSTER in sc wc; do
     pushd "kubespray/inventory/${CLUSTER}"
     terraform destroy \
         -var-file cluster.tfvars \
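
---

Reviewer note: the core behavioral change in this diff is the `prefix` validation added to `bin/common.bash`, which every `bin/*.bash` entry point sources. It can be sanity-checked in isolation; the sketch below wraps the same regex in a function so it can be exercised without the rest of the scripts (`check_prefix` is an illustrative name, not part of the patch):

```shell
#!/bin/bash
# Standalone sketch of the prefix validation this diff adds to bin/common.bash.
# check_prefix is a hypothetical helper, not part of the patch; the regex and
# error message mirror the added lines.
check_prefix() {
    local prefix="$1"
    if [[ ! "${prefix}" =~ ^(wc|sc|both)$ ]]; then
        echo "ERROR: invalid value set for \"prefix\", valid values are <wc|sc|both>" 1>&2
        return 1
    fi
    echo "prefix '${prefix}' accepted"
}

check_prefix wc
check_prefix both
check_prefix prod 2>/dev/null || echo "prefix 'prod' rejected"
```

Anchoring the regex with `^...$` matters: without it, a prefix like `my-wc-cluster` would slip through because `=~` matches substrings.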