Added documentation on EKS and KEDA #232

Merged
merged 1 commit into from Nov 22, 2024
1 change: 1 addition & 0 deletions docs/user-guide/cookbooks/index.md
@@ -4,6 +4,7 @@
- [x] [Start/Stop EC2/RDS instances using schedule or manual endpoint](./schedule-start-stop-ec2.md)
- [x] [Calculate VPC subnet CIDRs](./VPC-subnet-calculator.md)
- [x] [Kubernetes in different stages](./k8s.md)
+- [x] [KEDA: Kubernetes autoscaling](./k8s.keda.md)
- [x] [Encrypting/decrypting files with SOPS+KMS](./sops-kms.md)
- [x] [Enable/Disable nat gateway](./enable-nat-gateway.md)
- [x] [ArgoCD add external cluster](./argocd-external-cluster.md)
202 changes: 202 additions & 0 deletions docs/user-guide/cookbooks/k8s.keda.md
@@ -0,0 +1,202 @@
# K8s pod autoscaling with KEDA

Kubernetes, a powerful container orchestration platform, revolutionized the way applications are deployed and managed. However, scaling applications to meet fluctuating workloads can be a complex task. KEDA (Kubernetes Event-Driven Autoscaler) provides a simple yet effective solution to automatically scale Kubernetes pods based on various metrics, including resource utilization, custom metrics, and external events.

## Goal

To install and configure KEDA on an EKS Cluster created the [**binbash Leverage**](https://leverage.binbash.co/) way.

!!! Note
    To read more on how to create the EKS Cluster the [**binbash Leverage**](https://leverage.binbash.co/) way, read [here](./k8s.md).

!!! Note
    The following example uses a KEDA add-on called [http-add-on](https://github.com/kedacore/http-add-on/).

!!! Note
    To learn more about KEDA, read [the official site](https://keda.sh/docs/2.15/).

![KEDA](https://keda.sh/img/logos/keda-icon-color.png)

### Assumptions

We are assuming the [**binbash Leverage**](https://leverage.binbash.co/) [Landing Zone](https://leverage.binbash.co/try-leverage/) is deployed, an account called `apps-devstg` was created, and region `us-east-1` is being used. In any case, you can adapt these examples to other scenarios.

---

## Installation

To install KEDA, just enable it in the components layer [here](https://github.com/binbashar/le-tf-infra-aws/tree/master/apps-devstg/us-east-1/k8s-eks/k8s-components).

Note `enable_keda` has to be set to `true` and, for the following example, `enable_keda_http_add_on` as well.

To read more on how to enable components see [here](./k8s.md#eks).

## Giving it a try!

Now, let's create an example to show how KEDA works.

We will deploy a simple NGINX server. These are the manifests.

Let's create a namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demoapps
  labels:
    name: demoapps
```

This is the `nginx.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: demoapps
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
```

And this is the `service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: demoapps
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Deploy the resources using `kubectl`.

!!! Info
    You can use `kubectl` through [**binbash Leverage**](https://leverage.binbash.co/); for more info read [here](../../leverage-cli/reference/kubectl/).
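
For instance, assuming you saved the manifests above as `namespace.yaml`, `nginx.yaml` and `service.yaml` (hypothetical file names), a minimal sequence would be:

```shell
# Create the namespace first, then the Deployment and the Service:
kubectl apply -f namespace.yaml
kubectl apply -f nginx.yaml
kubectl apply -f service.yaml

# Check what was created:
kubectl get all -n demoapps
```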

These are the deployed resources:

```shell
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-5bb85d69d8-g997n 1/1 Running 0 55s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-svc NodePort 10.100.222.129 <none> 80:30414/TCP 54s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx-deployment 1/1 1 1 56s

NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-deployment-5bb85d69d8 1 1 1 56s
```

To try it, create a port-forward to the service and hit it from your browser.

```shell
kubectl port-forward -n demoapps svc/nginx-svc 8080:80
```

Try it!

```shell
curl localhost:8080
```
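
If everything is up, NGINX should answer with its default welcome page:

```shell
curl -s localhost:8080 | grep '<title>'
# <title>Welcome to nginx!</title>
```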

Note the Deployment has no Horizontal Pod Autoscaler (HPA) attached, so it won't scale, i.e. it will always have one pod (as per the manifests).

Let's then create a KEDA autoscaler!

This is the manifest:

```yaml
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: nginx-scaledobject
  namespace: demoapps
spec:
  hosts:
  - "thehostname.internal"
  targetPendingRequests: 100
  scaleTargetRef:
    deployment: nginx-deployment
    service: nginx-svc
    port: 80
  replicas:
    min: 0
    max: 10
```
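
Assuming you saved this manifest as `httpscaledobject.yaml` (a hypothetical name), apply it and list the resulting objects:

```shell
kubectl apply -f httpscaledobject.yaml

kubectl get hpa,httpscaledobject -n demoapps
```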

It can be seen that an HPA and a custom resource were created:

```shell
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/keda-hpa-nginx-scaledobject Deployment/nginx-deployment <unknown>/100 (avg) 1 10 0 15s

NAME TARGETWORKLOAD TARGETSERVICE MINREPLICAS MAXREPLICAS AGE ACTIVE
nginx-scaledobject apps/v1/Deployment/nginx-deployment nginx-svc:80 0 10 52s
```

Note that in the HPA no replicas are in place, i.e. there are no pods for our app. Now, if you try:

```shell
kubectl port-forward -n demoapps svc/nginx-svc 8080:80
```

...it will fail, since no pods are available to answer the service.

Instead, we have to hit a KEDA interceptor, which will route the traffic using the `hosts` set in the `HTTPScaledObject` object.

We've set `thehostname.internal` as the host name, so let's port-forward the interceptor...

```shell
kubectl port-forward -n keda svc/keda-add-ons-http-interceptor-proxy 8080:8080
```

...and hit it with the Host header set:

```shell
curl localhost:8080 -H "Host: thehostname.internal"
```

If you check the HPA now, it will have at least one replica.
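
A simple way to see it, as a sketch:

```shell
# Watch the HPA react while you send requests from another terminal:
kubectl get hpa -n demoapps -w
```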

!!! Note
    The first query will have a delay, since the pod has to be created.

Then, if you cancel the port-forward and wait for a while, the deployment will be scaled down to zero again.

Voilà!

!!! Note
    There are other ways to configure KEDA, e.g. using Prometheus metrics; read more [here](https://keda.sh/docs/2.15/concepts/).

## Final thoughts

Given the scale-to-zero feature for pods, KEDA is a great match for Karpenter!
75 changes: 66 additions & 9 deletions docs/user-guide/cookbooks/k8s.md
@@ -265,7 +265,7 @@ See also [here](/user-guide/ref-architecture-eks/overview/).

### Goal

-A cluster with one node (worker) and the control plane managed by AWS is deployed here.
+A cluster with one node (worker) per AZ and the control plane managed by AWS is deployed here.

Cluster autoscaler is used to create more nodes.

@@ -275,26 +275,83 @@ These are the steps:

- 0 - copy the [K8s EKS layer](https://github.com/binbashar/le-tf-infra-aws/tree/master/apps-devstg/us-east-1/k8s-eks) to your [**binbash Leverage**](https://leverage.binbash.co/) project.
- paste the layer under the `apps-devstg/us-east-1` account/region directory
-- 1 - apply layers
-- 2 - access the cluster
+- 1 - create the network
+- 2 - add the path to the VPN server
+- 3 - create the cluster and dependencies/components
+- 4 - access the cluster

---

#### 0 - Copy the layer

-A few methods can be used to download the [KOPS layer](https://github.com/binbashar/le-tf-infra-aws/tree/master/apps-devstg/us-east-1/k8s-eks) directory into the [**binbash Leverage**](https://leverage.binbash.co/) project.
+A few methods can be used to download the [K8s EKS layer](https://github.com/binbashar/le-tf-infra-aws/tree/master/apps-devstg/us-east-1/k8s-eks) directory into the [**binbash Leverage**](https://leverage.binbash.co/) project.

E.g. [this addon](https://addons.mozilla.org/en-US/firefox/addon/gitzip/?utm_source=addons.mozilla.org&utm_medium=referral&utm_content=search) is a nice way to do it.

-Paste this layer into the account/region chosen to host this, e.g. `apps-devstg/us-east-1/`, so the final layer is `apps-devstg/us-east-1/k8s-eks/`.
+Paste this layer into the account/region chosen to host this, e.g. `apps-devstg/us-east-1/`, so the final layer is `apps-devstg/us-east-1/k8s-eks/`. Note you can change the layer name (and the CIDRs and cluster name) if you already have an EKS cluster in this account/region.

-#### 1 - Apply layers
+#### 1 - Create the network

-First go into each layer and config the Terraform S3 background key, CIDR for the network, names, addons, etc.
+First go into the network layer (e.g. `apps-devstg/us-east-1/k8s-eks/network`) and configure the Terraform S3 backend key, the CIDR for the network, names, etc.

```shell
cd apps-devstg/us-east-1/k8s-eks/network
```

Then, from inside the layer run:

```shell
leverage tf init
leverage tf apply
```

#### 2 - Add the path to the VPN server

Since we are working on a private subnet (as per the [**binbash Leverage**](https://leverage.binbash.co/) and AWS Well-Architected Framework best practices), we need to set up the VPN routes.

If you are using the [Pritunl VPN server](../VPN-server/) (as per the [**binbash Leverage**](https://leverage.binbash.co/) recommendations), add a route for the CIDR set in step 1 to the server you use to connect to the VPN.

Then, connect to the VPN to access the private space.

#### 3 - Create the cluster

First go into each layer and configure the Terraform S3 backend key, names, addons, the components to install, etc.

```shell
cd apps-devstg/us-east-1/k8s-eks/
```

Then apply the layers as follows:

```shell
-leverage tf init --layers network,cluster,identities,addons,k8s-components
-leverage tf apply --layers network,cluster,identities,addons,k8s-components
+leverage tf init --layers cluster,identities,addons,k8s-components
+leverage tf apply --layers cluster,identities,addons,k8s-components
```

#### 4 - Access the cluster

Go into the cluster layer:

```shell
cd apps-devstg/us-east-1/k8s-eks/cluster
```

Use the embedded `kubectl` to configure the context:

```shell
leverage kubectl configure
```

!!! Info
    You can export the context to use it with standalone `kubectl`.

Once this process is done, you'll end up with temporary credentials created for `kubectl`.

Now you can try a `kubectl` command, e.g.:

```shell
leverage kubectl get ns
```

!!! Info
    If you've followed the [**binbash Leverage**](https://leverage.binbash.co/) recommendations, your cluster will live on a private subnet, so you need to connect to the VPN in order to access the K8s API.
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -298,6 +298,7 @@ nav:
- Start/Stop EC2/RDS instances using schedule or manual endpoint: "user-guide/cookbooks/schedule-start-stop-ec2.md"
- Calculate VPC subnet CIDRs: "user-guide/cookbooks/VPC-subnet-calculator.md"
- Kubernetes in different stages: "user-guide/cookbooks/k8s.md"
+- KEDA, Kubernetes autoscaling: "user-guide/cookbooks/k8s.keda.md"
- Encrypting/decrypting files with SOPS+KMS: "user-guide/cookbooks/sops-kms.md"
- Enable/Disable nat gateway: "user-guide/cookbooks/enable-nat-gateway.md"
- ArgoCD add external cluster: "user-guide/cookbooks/argocd-external-cluster.md"