fix: Using multiple backend Services (#6332)
* rewrite of the guide

* Merge KIC 3.0 prerequisites in to KIC 2.x

---------

Co-authored-by: Michael Heap <[email protected]>
Rajakavitha1 and mheap authored Oct 18, 2023
1 parent 5c87671 commit a8c2069
Showing 3 changed files with 151 additions and 108 deletions.
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -188,4 +188,4 @@ DEPENDENCIES
rubocop

BUNDLED WITH
2.4.10
2.4.21
81 changes: 68 additions & 13 deletions app/_includes/md/kic/prerequisites.md
@@ -1,18 +1,64 @@
<details markdown="1">
<summary>
<blockquote class="note">
<p style="cursor: pointer">Before you begin ensure that you have <u>Installed {{site.kic_product_name}} </u> in your Kubernetes cluster and are able to connect to Kong.</p>
<p style="cursor: pointer">Before you begin, ensure that you have <u>Installed {{site.kic_product_name}}</u> {% unless include.disable_gateway_api %}with Gateway API support {% endunless %}in your Kubernetes cluster and are able to connect to Kong.</p>
</blockquote>
</summary>

## Prerequisites

{% unless include.disable_gateway_api %}
## Install the Gateway APIs
### Install the Gateway APIs

1. Install the Gateway API CRDs before installing {{ site.kic_product_name }}.

```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.1/standard-install.yaml
```

{% if include.gateway_api_experimental %}

1. Install the experimental Gateway API CRDs to test this feature.

```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.8.1/experimental-install.yaml
```
{% endif %}

If you wish to use the Gateway APIs examples, ensure that you enable support for [Gateway APIs in KIC](/kubernetes-ingress-controller/{{page.kong_version}}/deployment/install-gateway-apis).
1. Create a `Gateway` and `GatewayClass` instance to use. An optional verification check follows these steps.

```bash
echo "
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
name: kong
annotations:
konghq.com/gatewayclass-unmanaged: 'true'
spec:
controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
name: kong
spec:
gatewayClassName: kong
listeners:
- name: proxy
port: 80
protocol: HTTP
" | kubectl apply -f -
```
The results should look like this:
```text
gatewayclass.gateway.networking.k8s.io/kong created
gateway.gateway.networking.k8s.io/kong created
```
{% endunless %}
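{% unless include.disable_gateway_api %}
Optionally, verify that the Gateway API CRDs are installed and that the `GatewayClass` and `Gateway` created above exist. This is a minimal sanity check and is not required for the rest of the guide:

```bash
# Confirm that the Gateway API CRDs are installed in the cluster.
kubectl get crd gatewayclasses.gateway.networking.k8s.io gateways.gateway.networking.k8s.io

# Confirm that the GatewayClass and Gateway created above are present.
kubectl get gatewayclass kong
kubectl get gateway kong
```
{% endunless %}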

## Prerequisites

### Install Kong
You can install Kong in your Kubernetes cluster using [Helm](https://helm.sh/).
@@ -29,17 +75,29 @@ You can install Kong in your Kubernetes cluster using [Helm](https://helm.sh/).
helm install kong kong/ingress -n kong --create-namespace
```
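Optionally, confirm that the Kong pods are running before you continue. This is a quick check, using the `kong` namespace from the install command above:

```bash
kubectl get pods --namespace kong
```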

{% if include.gateway_api_experimental %}
1. Enable the Gateway API Alpha feature gate:

```bash
kubectl set env -n kong deployment/kong-controller CONTROLLER_FEATURE_GATES="GatewayAlpha=true" -c ingress-controller
```

The results should look like this:
```text
deployment.apps/kong-controller env updated
```
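Setting the environment variable triggers a rolling restart of the controller Deployment. Optionally, wait for the rollout to finish before continuing (a minimal check, assuming the Deployment name used above):

```bash
kubectl rollout status -n kong deployment/kong-controller
```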
{% endif %}


### Test connectivity to Kong

Kubernetes exposes the proxy through a Kubernetes service. Run the following commands to store the load balancer IP address in a variable named `PROXY_IP`:

1. Populate `$PROXY_IP` for future commands. If your load balancer publishes a hostname instead of an IP address, see the note after these steps:

```bash
HOST=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
PORT=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.spec.ports[0].port}')
export PROXY_IP=${HOST}:${PORT}
echo $PROXY_IP
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $PROXY_IP
```

2. Ensure that you can call the proxy IP:
@@ -60,7 +118,4 @@ Kubernetes exposes the proxy through a Kubernetes service. Run the following com
{"message":"no Route matched with those values"}
```
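If your cluster's load balancer publishes a hostname instead of an IP address (as some cloud providers do), the `ip` field used above is empty. A minimal variant of the same lookup using the `hostname` field, assuming the proxy Service is still named `kong-gateway-proxy`:

```bash
export PROXY_IP=$(kubectl get svc --namespace kong kong-gateway-proxy -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo $PROXY_IP
```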

If you are not able to connect to Kong, read the [deployment guide](/kubernetes-ingress-controller/{{ page.release }}/deployment/overview/).

</details>
</details>
176 changes: 82 additions & 94 deletions app/_src/kic-v2/guides/using-multiple-backends.md
@@ -8,109 +8,97 @@ stability_message: |

## Overview

HTTPRoute supports adding multiple Services under its
`BackendRefs` field. When you add multiple Services,
requests through the HTTPRoute are distributed across the Services. This guide
walks through creating an HTTPRoute with multiple backend Services.
HTTPRoute supports adding multiple Services under its `BackendRefs` field. When you add multiple Services,
requests through the HTTPRoute are distributed across the Services. This guide walks through creating an HTTPRoute with multiple backend Services.

{% include /md/kic/installation.md %}
{% include_cached /md/kic/prerequisites.md kong_version=page.kong_version disable_gateway_api=false %}

{% include /md/kic/class.md %}
## Deploy multiple Services with HTTPRoute

## Deploy multiple Services
1. Deploy a second echo Service so that you have a second `BackendRef` to use for traffic splitting:
```bash
kubectl apply -f {{site.links.web}}/assets/kubernetes-ingress-controller/examples/echo-services.yaml
```
The results should look like this:
```text
service/echo created
deployment.apps/echo created
service/echo2 created
deployment.apps/echo2 created
```
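Optionally, wait for both Deployments to become available before creating the route. This is a minimal check; the resource names come from the output above:

```bash
kubectl wait --for=condition=Available deployment/echo deployment/echo2 --timeout=120s
```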

To do so, you can deploy a second echo Service so that you have
a second `BackendRef` to use for traffic splitting:
```bash
kubectl apply -f {{site.links.web}}/assets/kubernetes-ingress-controller/examples/echo-services.yaml
```
Response:
```text
service/echo created
deployment.apps/echo created
service/echo2 created
deployment.apps/echo2 created
```
1. Deploy an HTTPRoute that sends traffic to both Services. By default, traffic is distributed evenly across all Services (an optional status check follows these steps):

## Create a multi-Service HTTPRoute
```bash
echo 'apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo
  annotations:
    konghq.com/strip-path: "true"
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo
      kind: Service
      port: 80
    - name: echo2
      kind: Service
      port: 80
' | kubectl apply -f -
```
The results should look like this:
```text
httproute.gateway.networking.k8s.io/echo created
```

Now that those two Services are deployed, you can now deploy an HTTPRoute that
sends traffic to both of them. By default, traffic is distributed evenly across
all Services:

```bash
echo 'apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo
  annotations:
    konghq.com/strip-path: "true"
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo
      kind: Service
      port: 80
    - name: echo2
      kind: Service
      port: 80
' | kubectl apply -f -
```
Response:
```text
httproute.gateway.networking.k8s.io/echo created
```

Sending many requests through this route and tabulating the results will show
an even distribution of requests across the Services:
```bash
curl -s 192.168.96.0/echo/hostname?iteration=[1-200] -w "\n" | sort | uniq -c
```
Response:
```text
100 echo2-7cb798f47-gv6hs
100 echo-658c5ff5ff-tv275
```
1. Send multiple requests through this route and tabulate the results to check that requests are distributed evenly across the Services:
```bash
curl -s "$PROXY_IP/echo/hostname?iteration="{1..200} -w "\n" | sort | uniq -c
```
The results should look like this:
```text
100 echo2-7cb798f47-gv6hs
100 echo-658c5ff5ff-tv275
```
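If the distribution looks wrong, you can check whether {{site.kic_product_name}} accepted the route by inspecting its status. This is an optional check; the exact conditions reported can vary by Gateway API and controller version:

```bash
# Look for an "Accepted" condition with status "True" under Status -> Parents.
kubectl describe httproute echo
```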

## Add Service weights

The `weight` field overrides the default distribution of requests across
Services. Each Service instead receives `weight / sum(all Service weights)`
percent of the requests. Add weights to the Services in the HTTPRoute's
backend list:
The `weight` field overrides the default distribution of requests across Services. Each Service instead receives `weight / sum(all Service weights)` percent of the requests.
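For example, with a weight of `200` on `echo` and `100` on `echo2`, `echo` receives 200/300 (about 67%) of the requests and `echo2` receives 100/300 (about 33%). The steps below set these weights with `kubectl patch`; as a sketch, the same split can also be declared directly in the HTTPRoute manifest, using the `weight` field of each entry in `backendRefs`:

```bash
echo 'apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: echo
  annotations:
    konghq.com/strip-path: "true"
spec:
  parentRefs:
  - name: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    - name: echo
      kind: Service
      port: 80
      weight: 200
    - name: echo2
      kind: Service
      port: 80
      weight: 100
' | kubectl apply -f -
```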
1. Add weights to the Services in the HTTPRoute's backend list.
```bash
kubectl patch --type json httproute echo -p='[
{
"op":"add",
"path":"/spec/rules/0/backendRefs/0/weight",
"value":200
},
{ "op":"add",
"path":"/spec/rules/0/backendRefs/1/weight",
"value":100
}
]'
```
Response:
```text
httproute.gateway.networking.k8s.io/echo patched
```
```bash
kubectl patch --type json httproute echo -p='[
{
"op":"add",
"path":"/spec/rules/0/backendRefs/0/weight",
"value":200
},
{
"op":"add",
"path":"/spec/rules/0/backendRefs/1/weight",
"value":100
}
]'
```
The results should look like this:
```text
httproute.gateway.networking.k8s.io/echo patched
```
Sending the same requests will now show roughly 1/3 of the requests going to
`echo2` and 2/3 going to `echo`:
1. Send the same requests again. Roughly 1/3 of the requests now go to `echo2` and 2/3 go to `echo`:
```bash
curl -s 192.168.96.0/echo/hostname?iteration=[1-200] -w "\n" | sort | uniq -c
```
Response:
```text
67 echo2-7cb798f47-gv6hs
133 echo-658c5ff5ff-tv275
```
```bash
curl -s "$PROXY_IP/echo/hostname?iteration="{1..200} -w "\n" | sort | uniq -c
```
The results should look like this:
```text
67 echo2-7cb798f47-gv6hs
133 echo-658c5ff5ff-tv275
```
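To clean up the resources created in this guide, delete the HTTPRoute and the echo Services (a sketch, assuming you created them with the manifests above):

```bash
kubectl delete httproute echo
kubectl delete -f {{site.links.web}}/assets/kubernetes-ingress-controller/examples/echo-services.yaml
```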
