This repo is used to:
- Demo the KEDA connection to Confluent Cloud
- Demo the scaling capabilities of KEDA
Used Minikube. To get the service endpoint from Minikube:

```shell
minikube service dep --url
```
```shell
touch main.go
# Add in basic server code
go mod init confluent-keda-poc
go mod tidy
```
Reference: installing KEDA via Helm:

```shell
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
```
- Spammed the produce endpoint: http://127.0.0.1:56825/api/produce
- Confluent Cloud consumer lag
- Specified a scaling target of 100 total consumer lag
- Total Kafka consumer lag
- Note that topic-specific scaling is not set for `topic_2`. This demo shows that KEDA triggers based on total Kafka consumer lag
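A ScaledObject along these lines could express that setup: a `kafka` trigger with a `lagThreshold` of 100 and no `topic` set, so lag is summed across every topic the consumer group subscribes to. The names and bootstrap server below are placeholders, not the repo's actual values:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler            # placeholder name
spec:
  scaleTargetRef:
    name: consumer                 # placeholder deployment name
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: <confluent-bootstrap-server>:9092
        consumerGroup: <consumer-group>
        lagThreshold: "100"        # scale when total consumer lag exceeds 100
        # topic omitted: lag is computed across all topics the group subscribes to
```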
- HPA trigger scales up to 2 replicas
- The cpu scaler needs the HPAContainerMetrics feature enabled
GitHub link: https://github.com/JosephABC/keda
Changes are in kafka_scaler.go, in the getLagForPartition function.
- Consumer lag remains the same because the partition is stuck
- The custom KEDA code excludes consumer lag for these partitions, hence the 0 consumer lag shown in the HPA
- KEDA does not trigger scaling of the consumer deployment based on these stuck partitions
- A message in a partition encounters an error, cannot be consumed, and its offset cannot be committed
- A partition key is specified for the topic and large consumer lag is observed on one or a few particular partitions; scaling out will probably have little effect on performance
- The metric watched is the total consumer lag for the topic, or for all topics subscribed to by the consumer group
- The `containerName` parameter requires Kubernetes cluster version 1.20 or higher with the `HPAContainerMetrics` feature enabled
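For reference, `containerName` is set in the cpu trigger's metadata so the HPA reads CPU from one specific container rather than the whole pod. A sketch, where the container name and target value are assumptions:

```yaml
triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: "80"              # target average CPU utilization (%), an example value
      containerName: consumer  # scope the metric to this container; needs HPAContainerMetrics
```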
To observe and kill the process locally:

```shell
netstat -anop | grep -i 5000   # find the PID listening on port 5000
kill <PID>                     # graceful kill
kill -9 <PID>                  # force kill if it does not exit
```
To build and deploy:

```shell
IMAGE_REGISTRY=docker.io IMAGE_REPO=josephangbc make publish
IMAGE_REGISTRY=docker.io IMAGE_REPO=josephangbc make deploy
```