Create benchmarks/receive for load and auto scale testing of receive #34

Status: Closed · wants to merge 3 commits

Changes shown from 2 commits
6 changes: 3 additions & 3 deletions .bingo/Variables.mk
@@ -46,9 +46,9 @@ $(MISSPELL): .bingo/misspell.mod
 	@echo "(re)installing $(GOBIN)/misspell-v0.3.4"
 	@cd .bingo && $(GO) build -modfile=misspell.mod -o=$(GOBIN)/misspell-v0.3.4 "github.com/client9/misspell/cmd/misspell"

-PROMU := $(GOBIN)/promu-v0.5.0
+PROMU := $(GOBIN)/promu-v0.8.1
 $(PROMU): .bingo/promu.mod
 	@# Install binary/ries using Go 1.14+ build command. This is using bwplotka/bingo-controlled, separate go module with pinned dependencies.
-	@echo "(re)installing $(GOBIN)/promu-v0.5.0"
-	@cd .bingo && $(GO) build -modfile=promu.mod -o=$(GOBIN)/promu-v0.5.0 "github.com/prometheus/promu"
+	@echo "(re)installing $(GOBIN)/promu-v0.8.1"
+	@cd .bingo && $(GO) build -modfile=promu.mod -o=$(GOBIN)/promu-v0.8.1 "github.com/prometheus/promu"

4 changes: 1 addition & 3 deletions .bingo/go.mod
@@ -1,3 +1 @@
-module _ // Fake go.mod auto-created by 'bingo' for go -moddir compatibility with non-Go projects. Commit this file, together with other .mod files.
-
-go 1.13
+module _ // Fake go.mod auto-created by 'bingo' for go -moddir compatibility with non-Go projects. Commit this file, together with other .mod files.
4 changes: 2 additions & 2 deletions .bingo/promu.mod
@@ -1,5 +1,5 @@
 module _ // Auto generated by https://github.com/bwplotka/bingo. DO NOT EDIT

-go 1.14
+go 1.16

-require github.com/prometheus/promu v0.5.0
+require github.com/prometheus/promu v0.8.1
1 change: 1 addition & 0 deletions .gitignore
@@ -28,3 +28,4 @@ website/docs-pre-processed/
 tmp/bin

 .bin
+/benchmarks/receive/thanos-receive-benchmark
5 changes: 5 additions & 0 deletions benchmarks/receive/Dockerfile
@@ -0,0 +1,5 @@
+FROM alpine
+
+COPY ./thanos-receive-benchmark /usr/bin/thanos-receive-benchmark
+
+ENTRYPOINT ["/usr/bin/thanos-receive-benchmark"]
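The Dockerfile expects the `thanos-receive-benchmark` binary (the path just added to `.gitignore`) to already exist in the build context. The PR doesn't show the build command, so the following is only a plausible local build sequence, assuming the benchmark's Go main package lives in `benchmarks/receive`:

```shell
# Hypothetical build steps: compile a static Linux binary so it runs on
# alpine, then build the image. The package path is an assumption.
cd benchmarks/receive
CGO_ENABLED=0 GOOS=linux go build -o thanos-receive-benchmark .
docker build -t thanos-receive-benchmark .
```

With kind, the image can then be made available to the cluster via `kind load docker-image thanos-receive-benchmark`.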
29 changes: 29 additions & 0 deletions benchmarks/receive/README.md
@@ -0,0 +1,29 @@
# thanos-receive benchmark

This benchmark was set up so that we have a proper way of testing auto-scaling Thanos Receivers.

So far I've only run the tests locally with kind on a pretty beefy machine (AMD Ryzen 3900X); you might want to tweak the replica counts in `run.sh` before giving it a go on your machine.

Most of the logic for scaling up and down, and for deleting running Pods while at full load, lives in `run.sh`.
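The actual phases are defined in `run.sh` itself; as a rough sketch of that scale-up/kill/scale-down logic (replica counts, resource names, and the rollout wait are illustrative assumptions, not the script's actual values):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the run.sh benchmark phases. All names and
# counts here are illustrative; see run.sh for the real values.
set -euo pipefail

NAMESPACE=thanos

# Phase 1: ramp up the load generators.
kubectl scale deployment -n "$NAMESPACE" thanos-receive-benchmark --replicas 5

# Phase 2: scale the Receivers up while under full load.
kubectl scale statefulset -n "$NAMESPACE" thanos-receive --replicas 6
kubectl rollout status statefulset -n "$NAMESPACE" thanos-receive

# Phase 3: delete a running Receiver Pod at full load to observe recovery.
kubectl delete pod -n "$NAMESPACE" thanos-receive-0

# Phase 4: scale back down again.
kubectl scale statefulset -n "$NAMESPACE" thanos-receive --replicas 3
```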

## Getting started

1. `kind create cluster` to create a local cluster (you can skip this if you have another cluster available).
1. `kubectl create namespace thanos` to create the necessary Thanos namespace.
1. Clone kube-prometheus and run `kubectl apply -f ./manifests/setup/` and `kubectl apply -f ./manifests/` from its root.
1. Optionally delete the Alertmanagers with `kubectl delete alertmanagers.monitoring.coreos.com -n monitoring main`.
1. `kubectl edit prometheuses.monitoring.coreos.com -n monitoring k8s` and set the replicas to 1 for a simpler life during development.
1. Back in this repository, run `kubectl apply -f ./benchmarks/receive/manifests/prometheus-operator` to configure Prometheus to scrape our benchmark.
1. `kubectl apply -f ./benchmarks/receive/manifests/` to deploy everything else: Thanos Querier, Thanos Receiver, Thanos Receive Router, Thanos Receive Controller, and one instance of the custom Thanos Receive Benchmark.
1. In another terminal run `kubectl port-forward -n monitoring svc/grafana 3000` and log in to Grafana with `admin:admin`.
1. Upload the `ThanosReceiveBenchmark.json` dashboard.
1. Finally, run the benchmark with `./benchmarks/receive/run.sh`.

### Running another benchmark

1. Scale the benchmark down so there's no more traffic: `kubectl scale deployment -n thanos thanos-receive-benchmark --replicas 0`.
1. Wait until all Deployments & StatefulSets have minimum replica count (probably 3).
1. Delete all Receiver and Receive Router Pods with `kubectl delete pod -n thanos -l app.kubernetes.io/name=thanos-receive` and `kubectl delete pod -n thanos -l app.kubernetes.io/name=thanos-receive-route`.
1. Delete the Prometheus Pod to start with fresh metrics: `kubectl delete pod -n monitoring prometheus-k8s-0`.
1. Run the benchmark again with `./benchmarks/receive/run.sh`.
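The reset steps above could be collected into a single helper script. A sketch, with the caveat that the wait in step 2 assumes the Receiver StatefulSet is named `thanos-receive` (not confirmed by this diff):

```shell
#!/usr/bin/env bash
# Sketch of a reset helper for re-running the benchmark. The commands
# mirror the numbered steps above; the rollout wait is an illustrative
# addition, not part of this PR.
set -euo pipefail

# 1. Stop the load generators.
kubectl scale deployment -n thanos thanos-receive-benchmark --replicas 0

# 2. Wait for the Receivers to settle back at their minimum replica count.
kubectl rollout status statefulset -n thanos thanos-receive

# 3. Recreate all Receiver and Receive Router Pods.
kubectl delete pod -n thanos -l app.kubernetes.io/name=thanos-receive
kubectl delete pod -n thanos -l app.kubernetes.io/name=thanos-receive-route

# 4. Drop Prometheus' local data to start with fresh metrics.
kubectl delete pod -n monitoring prometheus-k8s-0

# 5. Kick off the next run.
./benchmarks/receive/run.sh
```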