Adds example prometheus setup (#135)
Fixes #121
abesto authored and adriancole committed Oct 10, 2017
1 parent f94c29c commit 9f574ff
Showing 4 changed files with 106 additions and 5 deletions.
25 changes: 20 additions & 5 deletions README.md
@@ -154,14 +154,14 @@ To start the MySQL+Kafka configuration, run:

Then configure the [Kafka 0.10 sender](https://github.com/openzipkin/zipkin-reporter-java/blob/master/kafka10/src/main/java/zipkin/reporter/kafka10/KafkaSender.java)
or [Kafka 0.8 sender](https://github.com/openzipkin/zipkin-reporter-java/blob/master/kafka08/src/main/java/zipkin/reporter/kafka08/KafkaSender.java)
using a `bootstrapServers` value of `192.168.99.100:9092`.

By default, this assumes your Docker host IP is 192.168.99.100. If this is
not the case, adjust `KAFKA_ADVERTISED_HOST_NAME` in `docker-compose-kafka10.yml`
and the `bootstrapServers` configuration of the kafka sender to match your
Docker host IP.
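
For example, a sketch of that adjustment, assuming `docker-machine` manages your host and that `KAFKA_ADVERTISED_HOST_NAME` appears on its own line in the compose file (both assumptions, adapt to your setup):

```sh
# Point Kafka's advertised host name at the actual Docker host IP (hypothetical helper).
DOCKER_HOST_IP=$(docker-machine ip default)
sed -i "s/KAFKA_ADVERTISED_HOST_NAME: .*/KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_HOST_IP}/" docker-compose-kafka10.yml
```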

If you prefer to activate the
[Kafka 0.8 collector](https://github.com/openzipkin/zipkin/tree/master/zipkin-collector/kafka)
use `docker-compose-kafka.yml` instead of `docker-compose-kafka10.yml`:

@@ -180,7 +180,22 @@ To start the NGINX configuration, run:

This container doubles as a skeleton for creating proxy configuration around
Zipkin like authentication, dealing with CORS with zipkin-js apps, or
terminating SSL.

### Prometheus

Zipkin comes with a built-in Prometheus metric exporter. The main
`docker-compose.yml` file starts Prometheus configured to scrape Zipkin and
exposes it on port `9090`. You can open `$DOCKER_HOST_IP:9090` and start
exploring the metrics (which Zipkin serves on its `/prometheus` endpoint).
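
As a quick sanity check (a sketch, assuming the stack is already running via `docker-compose up -d` and `$DOCKER_HOST_IP` points at your Docker host):

```sh
# Raw metrics exposed by Zipkin, i.e. what Prometheus scrapes:
curl -s "http://$DOCKER_HOST_IP:9411/prometheus" | head
# Ask Prometheus whether its scrape targets are up:
curl -s --get "http://$DOCKER_HOST_IP:9090/api/v1/query" --data-urlencode 'query=up'
```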

`docker-compose.yml` also starts a Grafana container with authentication
disabled, exposing it on port `3000`. On startup it is configured to use the
Prometheus instance started by `docker-compose` as a data source, and it imports
the dashboard published at https://grafana.com/dashboards/1598. This means that,
after running `docker-compose up`, you can open
`$DOCKER_HOST_IP:3000/dashboard/db/zipkin-prometheus` and play around with the
dashboard.
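
To confirm the provisioning worked, one option (assuming the anonymous admin access configured here) is to query the Grafana HTTP API directly:

```sh
# Should list the "prom" Prometheus data source created by the setup container:
curl -s "http://$DOCKER_HOST_IP:3000/api/datasources"
# Should include the imported Zipkin dashboard:
curl -s "http://$DOCKER_HOST_IP:3000/api/search?query=zipkin"
```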

If you want to run the zipkin-ui standalone against a remote zipkin server, you
need to set `ZIPKIN_BASE_URL` accordingly:
30 changes: 30 additions & 0 deletions docker-compose.yml
@@ -58,3 +58,33 @@ services:
      # - JAVA_OPTS=-verbose:gc -Xms1G -Xmx1G
    depends_on:
      - storage

  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - 9090:9090
    depends_on:
      - storage
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml

  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    depends_on:
      - prometheus
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin

  setup_grafana_datasource:
    image: appropriate/curl
    container_name: setup_grafana_datasource
    depends_on:
      - grafana
    volumes:
      - ./prometheus/create-datasource-and-dashboard.sh:/create.sh:ro
    command: /create.sh
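
One way to exercise these services (a usage sketch, not part of the commit): bring the stack up and check that the one-shot setup container finished cleanly:

```sh
docker-compose up -d
# The setup container runs /create.sh once and exits; its logs show the Grafana API calls.
docker-compose logs setup_grafana_datasource
```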
20 changes: 20 additions & 0 deletions prometheus/create-datasource-and-dashboard.sh
@@ -0,0 +1,20 @@
#!/bin/sh

set -xeuo pipefail

# Create the "prom" Prometheus data source unless it already exists.
# Retry while Grafana is still starting up.
if ! curl --retry 5 --retry-connrefused --retry-delay 0 -sf http://grafana:3000/api/datasources/name/prom; then
  curl -sf -X POST -H "Content-Type: application/json" \
       --data-binary '{"name":"prom","type":"prometheus","url":"http://prometheus:9090","access":"proxy","isDefault":true}' \
       http://grafana:3000/api/datasources
fi

# Find the latest published revision of the dashboard on grafana.com.
dashboard_id=1598
last_revision=$(curl -sf https://grafana.com/api/dashboards/${dashboard_id}/revisions | grep '"revision":' | sed 's/ *"revision": \([0-9]*\),/\1/' | sort -n | tail -1)

# Download that revision, wrap it in an import request bound to the "prom"
# data source, and import it into Grafana.
echo '{"dashboard": ' > data.json
curl -s https://grafana.com/api/dashboards/${dashboard_id}/revisions/${last_revision}/download >> data.json
echo ', "inputs": [{"name": "DS_PROM", "pluginId": "prometheus", "type": "datasource", "value": "prom"}], "overwrite": false}' >> data.json
curl --retry-connrefused --retry 5 --retry-delay 0 -sf \
     -X POST -H "Content-Type: application/json" \
     --data-binary @data.json \
     http://grafana:3000/api/dashboards/import
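
If the data source or dashboard ever needs to be re-created, one option (an assumption, not something this commit automates) is to re-run the one-shot container by hand:

```sh
# Re-run the provisioning script against the running Grafana container:
docker-compose run --rm setup_grafana_datasource
```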
36 changes: 36 additions & 0 deletions prometheus/prometheus.yml
@@ -0,0 +1,36 @@
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'zipkin'
    scrape_interval: 5s
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['zipkin:9411']
    metric_relabel_configs:
      # Response code count
      - source_labels: [__name__]
        regex: '^counter_status_(\d+)_(.*)$'
        replacement: '${1}'
        target_label: status
      - source_labels: [__name__]
        regex: '^counter_status_(\d+)_(.*)$'
        replacement: '${2}'
        target_label: path
      - source_labels: [__name__]
        regex: '^counter_status_(\d+)_(.*)$'
        replacement: 'http_requests_total'
        target_label: __name__
      # Received message count
      - source_labels: [__name__]
        regex: '(?:gauge|counter)_zipkin_collector_(.*)_([^_]*)'
        replacement: '${2}'
        target_label: transport
      - source_labels: [__name__]
        regex: '(?:gauge|counter)_zipkin_collector_(.*)_([^_]*)'
        replacement: 'zipkin_collector_${1}'
        target_label: __name__
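
The relabel rules above turn Zipkin's flat metric names (for example `counter_status_200_api_v1_spans`) into conventional Prometheus series such as `http_requests_total{status="200",path="api_v1_spans"}`. A sketch of querying the rewritten series, assuming `$DOCKER_HOST_IP` points at your Docker host:

```sh
# Request rate per HTTP status and path, using the relabeled series name:
curl -s --get "http://$DOCKER_HOST_IP:9090/api/v1/query" \
  --data-urlencode 'query=sum(rate(http_requests_total[1m])) by (status, path)'
```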
