In this demo we are going to set up a Kubernetes cluster using kind and deploy two Spring Boot APIs, then implement tracing with OpenTelemetry. The reliability platform setup is loosely coupled and distributed. At the end you should be able to see tracing information from both the Kubernetes core components and the APIs.
Command:
kind create cluster --config 00-kind-cluster-setup/kind-cluster-1.yaml --name cluster-1
1. One control-plane node
2. One worker node
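For reference, a minimal kind configuration matching the topology above would look roughly like the sketch below; the actual kind-cluster-1.yaml may carry extra options (port mappings, node images, etc.).

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# One control-plane node and one worker node, as listed above.
nodes:
  - role: control-plane
  - role: worker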
1. cert-manager
2. Namespaces (observability and the project namespace app-a)
Command:
kubectl apply -f 01-prerequisites-deploy/
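The namespace manifests are assumed to be plain Namespace objects named after the list above; a minimal sketch:

apiVersion: v1
kind: Namespace
metadata:
  name: observability
---
apiVersion: v1
kind: Namespace
metadata:
  name: app-a

cert-manager itself is required by the OpenTelemetry Operator (deployed in the next step) to provision certificates for its admission webhooks.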
Command:
kubectl apply -f 02-otel-operator-deploy/
1. OpenTelemetry Collector as a DaemonSet (to collect traces from the Kubernetes core components)
2. OpenTelemetry Collector as a central (remote) deployment (to collect traces from all sources and forward them to a sink)
Command:
kubectl apply -f 03-otel-central-platform/
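As a sketch, the DaemonSet collector is an OpenTelemetryCollector resource with mode: daemonset that forwards everything it receives to the central collector. The names, namespace and endpoint below are placeholders, and the actual manifests in 03-otel-central-platform/ may differ (newer operator versions also serve apiVersion opentelemetry.io/v1beta1 with a structured config block).

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: node-collector               # placeholder name
  namespace: observability
spec:
  mode: daemonset                    # one collector pod per node
  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
          http: {}
    exporters:
      otlp:
        endpoint: central-collector.observability.svc:4317   # placeholder central collector address
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]

The central (remote) collector is the same kind of resource with mode: deployment, receiving OTLP from the node and sidecar collectors and exporting to the Jaeger and Zipkin sinks deployed in the following steps.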
1. Jaeger Operator
2. Jaeger Deployment
3. Zipkin Deployment
Command:
kubectl apply -f 04-jaeger-operator-deploy/
kubectl apply -f 05-jaeger-deploy/
kubectl apply -f 06-zipkin-deploy/
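The Jaeger instance created by 05-jaeger-deploy/ is assumed to be an all-in-one deployment (this matches the label selector used for port-forwarding at the end of this README); a minimal sketch:

apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger                       # assumed instance name, see the port-forward selector below
  namespace: observability
spec:
  strategy: allInOne                 # single pod with collector, query UI and in-memory storage

Zipkin has no operator here and is assumed to be a plain Deployment plus Service exposing the openzipkin/zipkin image on port 9411.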
1. Instrumentation config (e.g. Java)
2. OpenTelemetry local collector
3. OpenTelemetry sidecar collector
Command:
kubectl apply -f 07-otel-project-platform/
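The project-level platform revolves around the operator's Instrumentation resource (which tells the operator how to auto-instrument pods) plus collectors scoped to the project namespace. A sketch under assumed names; the local (deployment-mode) collector is analogous to the sidecar one shown here, and the real manifests in 07-otel-project-platform/ may differ.

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation         # placeholder name
  namespace: app-a
spec:
  exporter:
    endpoint: http://localhost:4317  # the sidecar collector listens inside the same pod
  propagators:
    - tracecontext
    - baggage
  java: {}                           # use the operator's default Java agent image
---
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: sidecar                      # placeholder name
  namespace: app-a
spec:
  mode: sidecar                      # injected into app pods via the sidecar annotation (see the app deploy step)
  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      otlp:
        endpoint: central-collector.observability.svc:4317   # placeholder central collector address
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [otlp]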
-----------------------           ---------------------           --------------------
| Client (UI/Browser) | =======>  | Front API service | =======>  | Customer service |
-----------------------           ---------------------           --------------------
Reference: Customer service
Reference: Front API service
1. Deploy the customer service
2. Deploy the front API service
Command:
kubectl apply -f 08-app-deploy/
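What makes the auto-instrumentation kick in are pod annotations referencing the Instrumentation and sidecar resources above. A sketch of what a deployment in 08-app-deploy/ is assumed to contain (names, image and port are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-service             # placeholder name
  namespace: app-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer-service
  template:
    metadata:
      labels:
        app: customer-service
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"   # attach the OpenTelemetry Java agent
        sidecar.opentelemetry.io/inject: "true"                # inject the sidecar collector
    spec:
      containers:
        - name: customer-service
          image: example/customer-service:latest               # placeholder image
          ports:
            - containerPort: 8080                              # placeholder port

The front API service deployment follows the same pattern.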
Command:
kubectl apply -f 09-testpods-deploy/
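The test pods are assumed to be simple client pods you can exec into and curl the front API service from, generating traffic and therefore traces. A sketch:

apiVersion: v1
kind: Pod
metadata:
  name: test-client                  # placeholder name
  namespace: app-a
spec:
  containers:
    - name: curl
      image: curlimages/curl:latest
      command: ["sh", "-c", "while true; do sleep 3600; done"]  # keep the pod alive so you can exec in and issue requests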
Jaeger
kubectl port-forward --namespace observability $(kubectl get pods --namespace observability -l "app.kubernetes.io/instance=jaeger,app.kubernetes.io/component=all-in-one" -o jsonpath="{.items[0].metadata.name}") 16686:16686 &
Zipkin
kubectl port-forward --namespace observability $(kubectl get pods --namespace observability -l "app=zipkin" -o jsonpath="{.items[0].metadata.name}") 9411:9411 &
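With the port-forwards in place, the Jaeger UI is reachable at http://localhost:16686 and the Zipkin UI at http://localhost:9411, where the traces from the core components and the two services should show up.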