Starship is a next-generation Observability platform built on eBPF+WASM.

Starship is to modern Observability as ChatGPT is to consumer knowledge discovery. eBPF enables instrumentation-free data collection, and WASM complements eBPF's inability to perform complex data processing.
Starship is developed by Tricorder Observability, proudly supported by MiraclePlus and the Open Source community.
The easiest way to get started with building Starship is to use the dev image:
```shell
git clone [email protected]:<fork>/Starship.git
cd Starship
# Launch the dev image container
devops/dev_image/run.sh
# Inside the container
bazel build src/...
```
`devops/dev_image/run.sh` mounts the pwd (which is the root of the cloned Starship repo) to `/starship` inside the dev image.
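The mount can be sketched as follows. This is a hypothetical illustration of what `run.sh` might do, not the actual script; the image name `tricorder/dev-image` is an assumption, and the block only constructs and prints the command:

```shell
# Hypothetical sketch of run.sh (assumed, not the actual script): bind-mount
# the current directory into the container at /starship and start a shell.
# We only build and print the docker command here for illustration.
cmd="docker run -it --rm -v $(pwd):/starship -w /starship tricorder/dev-image bash"
echo "$cmd"
```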
Helm-charts: install Starship on your Kubernetes cluster with helm.
We recommend Minikube v1.24.0. Starship deployment is broken on Kubernetes 1.25 and newer versions because the bundled kube-prometheus-stack uses Pod Security Policy, which was removed in Kubernetes 1.25. See issues/258.
```shell
minikube version
minikube version: v1.24.0
commit: 76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
```
```shell
# First start the minikube cluster, and make sure it has at least 8 CPUs and
# 8192 MB memory.
minikube start --profile=${USER} --cpus=8 --memory=8192

# Create a namespace for installing Starship.
# Do not use a different namespace, as our documentation uses this namespace
# consistently, and you might run into unexpected issues with a different
# namespace.
kubectl create namespace tricorder
kubectl config set-context --current --namespace=tricorder

# Add Starship's helm-charts and install Starship
helm repo add tricorder-starship https://tricorder-observability.github.io/Starship
helm install my-starship tricorder-starship/starship
```
You should see the following pods running on your cluster. More details can be found at helm-charts installation.

Then follow the CLI build and install to install `starship-cli`.
Then expose the API Server HTTP endpoint with `kubectl port-forward`:

```shell
# This allows starship-cli to access the API Server with
# --api-address=localhost:8081
kubectl port-forward service/api-server 8081:80 -n tricorder
```
DO NOT use the Web UI, as it's not working right now; see issue #80.
Then make sure you are at the root of the Starship repo, and create a pre-built module:

```shell
starship-cli --api-address localhost:8081 module create \
  --bcc-file-path=modules/ddos_event/ddos_event.bcc \
  --wasm-file-path=modules/ddos_event/write_events_to_output.wasm \
  --module-json-path=modules/ddos_event/module.json
```
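For orientation, a `module.json` might look roughly like the sketch below. This is a hypothetical illustration only: every field name here is an assumption, and the authoritative schema is the actual `modules/ddos_event/module.json` in the Starship repo.

```json
{
  "name": "ddos_event",
  "ebpf": { "source": "ddos_event.bcc" },
  "wasm": { "source": "write_events_to_output.wasm" },
  "output_fields": [
    { "name": "timestamp", "type": "TIMESTAMP" }
  ]
}
```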
Then deploy this module:
```shell
starship-cli --api-address=localhost:8081 module deploy -i 0aa9e5db_ffce_4276_b37e_0b2dd82814a1
```
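Note the module ID above reads like a UUID with dashes replaced by underscores. That is an observation from the sample ID, not a documented format; purely for illustration:

```shell
# Convert a standard UUID into the underscore form seen in the sample ID
# (an assumption based on that one example, not a documented contract).
uuid="0aa9e5db-ffce-4276-b37e-0b2dd82814a1"
module_id="$(echo "$uuid" | tr '-' '_')"
echo "$module_id"   # 0aa9e5db_ffce_4276_b37e_0b2dd82814a1
```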
Then expose Grafana with `kubectl port-forward`:

```shell
kubectl port-forward service/my-starship-grafana 8082:80 -n tricorder
```
Then open http://localhost:8082, and log in to Grafana with username `admin` and password `tricorder`.

Then click `Dashboards` -> `Browse`, and select the dashboard named `tricorder_<module_id>`.
You should see data reporting packets arriving with timestamp, as shown in the screenshot below.
Not yet very useful. We are working tirelessly on micro-service tracing! Stay tuned!
Before diving into the code base:
- Starship is built for the Kubernetes platform. Starship provides all the things you'll need to get started with Zero-Cost (or Zero-Friction) Observability.
- Starship provides Service Map, the most valuable information for understanding Cloud Native applications, and numerous other data, analytic, and visualization capabilities to satisfy the full spectrum of your needs in running and managing Cloud Native applications on Kubernetes.
- The core of Starship is the Tricorder agent, which runs data collection modules written in your favorite language and executed in eBPF+WASM. You can write your own modules in C/C++ (Go, Rust, and more languages are coming).
We are working on supporting all major frontend languages for writing eBPF programs, including:
Additionally, libbpf-style eBPF binary object files are supported as well.
- Starship Tricorder (aka Starship Agent): a data collection agent running as a daemonset. The agent executes eBPF+WASM modules and exports structured data to the storage engine. The code lives in src/agent.
- Starship API Server: manages Tricorder agents and the Promscale & Grafana backend servers; also supports the management Web UI and CLI. The code lives in src/api-server.
- Starship CLI: the command line tool to use Starship on your Kubernetes cluster. The code lives in src/cli.
- Starship Web UI: a Web UI for using Starship. The code lives in ui.
- Promscale: a unified metric and trace observability backend for Prometheus & OpenTelemetry. Starship uses Promscale to support Prometheus and OTel.
- Grafana: Starship uses Grafana to visualize Observability data.
- Kube-state-metrics (KSM): listens to the Kubernetes API server and generates metrics about the state of the objects. Starship uses KSM to expose cluster-level metrics.
- Prometheus: collects metrics from KSM and then remote-writes them to Promscale.
- OpenTelemetry: for distributed tracing and other awesome Observability features.
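The Prometheus-to-Promscale hop above can be sketched with a minimal Prometheus config snippet. The service name `promscale` is an assumption about how the chart names the service; port 9201 and the `/write` path are Promscale's default remote-write endpoint:

```yaml
# Minimal sketch: forward metrics scraped from KSM to Promscale via
# Prometheus remote_write. Service name is assumed for illustration.
remote_write:
  - url: "http://promscale:9201/write"
```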
- Fork the repo
- Create a Pull Request
- Ask for review
You can use Ansible to provision a development environment on your localhost. First install `ansible`:

```shell
sudo apt-get install ansible-core -y
git clone [email protected]:tricorder-observability/starship.git
cd starship
sudo devops/dev_image/ansible-playbook.sh devops/dev_image/dev.yaml
```
This installs a list of apt packages, and downloads and installs a number of other tools from the Internet.
Afterwards, you need to source the env var file to pick up the `PATH` environment variable (or put this into your shell's rc file):

```shell
source devops/dev_image/env.inc
```
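What sourcing such an env file typically does can be sketched with assumed contents (the actual `env.inc` may differ); here we write a stand-in file to `/tmp` so the block is self-contained:

```shell
# Create a stand-in env file that prepends a tools directory to PATH,
# then source it -- the same mechanism env.inc relies on.
echo 'export PATH="/opt/tools/bin:$PATH"' > /tmp/env.inc
source /tmp/env.inc
echo "$PATH" | grep -q '^/opt/tools/bin:' && echo "PATH updated"
```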
Afterwards, run `bazel build src/...` to build all targets in the Starship repo.
After making changes, run `tools/cleanup.sh` to clean up the codebase, then push the changes to the forked repo and create a Pull Request on the GitHub Web UI.