This setup provisions a Kubernetes Cluster to be used with our trainings.
We use Hetzner as our cloud provider and RKE2 to create the Kubernetes cluster. The Kubernetes Cloud Controller Manager for Hetzner Cloud is used to provision load balancers from Kubernetes Service (type `LoadBalancer`) objects and to configure the networking and native routing for the Kubernetes cluster network traffic.
Cluster setup is based on our infrastructure setup.
In order to deploy our acend Kubernetes Cluster the following steps are necessary:
- Terraform to deploy the base infrastructure
  - VMs for control plane and worker nodes
  - Network
  - Load balancer for the Kubernetes API and RKE2
  - Firewall
- Hetzner Cloud Controller Manager for the Kubernetes cluster networking
- Terraform to deploy and then bootstrap ArgoCD using our training-setup
- ArgoCD to deploy student/user resources and other components like
  - Storage provisioner (hcloud csi, Longhorn)
  - Ingress controller
  - Cert-Manager
  - Gitea
  - etc.
See our training-setup for details on how the bootstrapping works.
For more details on the cluster design and setup see the documentation in our main infrastructure repository.
ArgoCD is used to deploy components onto the cluster. ArgoCD is also used for the training itself.
There is a local admin account. The password can be extracted with `terraform output argocd-admin-password`.
Each student/user also gets a local account.
Cert Manager is used to issue Certificates (Let's Encrypt).
The ACME Webhook for the hosttech DNS API is used for dns01
challenges with our DNS provider.
The following `ClusterIssuer` resources are available:
- `letsencrypt-prod`: for general http01 challenges
- `letsencrypt-prod-acend`: for dns01 challenges using the hosttech ACME webhook. The token for hosttech is stored in the `hosttech-secret` Secret in the `cert-manager` Namespace.
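A minimal sketch of how a workload could request a certificate from one of these issuers (the name, namespace, secret name and hostname below are made-up placeholders):

```yaml
# Hypothetical example: request a wildcard certificate via the dns01 issuer.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: training-wildcard
  namespace: my-namespace
spec:
  secretName: training-wildcard-tls   # cert-manager stores the issued certificate here
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod-acend      # dns01 via the hosttech ACME webhook
  dnsNames:
    - "*.training.example.com"        # wildcards require the dns01 issuer
```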
The Kubernetes Cloud Controller Manager for Hetzner Cloud is deployed and allows provisioning load balancers based on Services with type `LoadBalancer`.
The Cloud Controller Manager is also responsible for creating all the necessary routes between the Kubernetes Nodes. See Network Support for details.
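As a sketch of what this looks like from the user side (the service name, ports and the location annotation are assumptions for illustration, not taken from our setup):

```yaml
# Hypothetical example: the Hetzner Cloud Controller Manager reconciles this
# Service and provisions a Hetzner Cloud load balancer for it.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # assumed location; see the hcloud CCM docs for the supported annotations
    load-balancer.hetzner.cloud/location: nbg1
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```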
To provision storage, we use the Hetzner CSI Driver.
The StorageClass `hcloud-volumes` is available. Be aware that `hcloud-volumes` are provisioned at our cloud provider and incur costs. Furthermore, we have limits on how much storage we can provision or, more precisely, attach to a VM.
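A minimal sketch of a PersistentVolumeClaim that explicitly requests this StorageClass (name and size are placeholders):

```yaml
# Hypothetical example: explicitly request a Hetzner Cloud volume.
# Keep requests small, as these volumes are billed by the cloud provider.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-hcloud
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi
```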
haproxy is used as the ingress controller. `haproxy` is the default `IngressClass`.
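A minimal example of an Ingress that targets this controller (host and backend service names are made up):

```yaml
# Hypothetical example: since haproxy is the default IngressClass,
# ingressClassName could also be omitted here.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: haproxy
  rules:
    - host: my-app.training.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```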
As our Kubernetes Nodes have enough local disk available, we use Longhorn as an additional storage solution. The `longhorn` StorageClass is set as the default StorageClass.
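Consequently, a PersistentVolumeClaim that does not specify a `storageClassName` ends up on Longhorn, for example (name and size are placeholders):

```yaml
# Hypothetical example: no storageClassName is set, so the default
# StorageClass (longhorn) is used for this claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```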
We use a local Gitea installation in our trainings.
The training environment contains the following per student/user:
- Credentials
- All necessary namespaces
- RBAC to access the namespaces (a rough sketch follows after this list)
- a Webshell per student/user
- a Gitea account and a Git repository clone of our argocd-training-example
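The exact resources are defined in the training-setup; as a rough sketch (all names are placeholders), the per-user RBAC could look like a RoleBinding of this form:

```yaml
# Hypothetical sketch only: grants one student admin access to their own
# namespace. The actual roles and names are defined in the training-setup.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: student1-admin
  namespace: student1
subjects:
  - kind: User
    name: student1
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
```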
It is deployed with ArgoCD using ApplicationSets. The ApplicationSets are deployed with Terraform.
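As a rough sketch of such an ApplicationSet (the generator, repository URL and paths are assumptions for illustration; the real definitions live in the Terraform code of the training-setup):

```yaml
# Hypothetical sketch: one ArgoCD Application per student, generated from a list.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: student-environments
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - username: student1
          - username: student2
  template:
    metadata:
      name: "env-{{username}}"
    spec:
      project: default
      source:
        repoURL: https://git.example.com/acend/argocd-training-example.git
        targetRevision: main
        path: "."
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{username}}"
      syncPolicy:
        automated: {}
```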
There is a Welcome page deployed at https://welcome.${cluster_name}.${cluster_domain} which contains, for each student/user, the URL of the Webshell as well as the credentials.
This repo shall be used as a module in our training-setup.