Add Prom/Grafana dashboard for M3DB as well as docker-compose for local development of m3-stack #939
Changes from 14 commits
docker/grafana/Dockerfile (new file, 3 lines):

```dockerfile
FROM grafana/grafana:latest

COPY ./docker/grafana/datasource.yaml /etc/grafana/provisioning/datasources/datasource.yaml
```
docker/grafana/datasource.yaml (new file, 5 lines):

```yaml
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus01:9090
```
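Grafana loads any YAML it finds under `/etc/grafana/provisioning/datasources/` at startup, so the Prometheus source above is available without any manual setup in the UI. For reference, a fuller provisioning file usually carries an `apiVersion` and a default flag; the sketch below illustrates that and is not part of this PR's diff:

```yaml
# Hedged sketch of a fuller datasource provisioning file.
# `apiVersion` and `isDefault` are standard Grafana provisioning
# fields, but they are assumptions here -- the PR's file omits them.
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                  # proxy queries through the Grafana backend
    url: http://prometheus01:9090  # compose service name on the backend network
    isDefault: true                # preselect this source in new panels
```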
docker/m3dbnode/Dockerfile (changed lines):

```diff
@@ -18,7 +18,7 @@ RUN cd /go/src/github.com/m3db/m3/ && \
 FROM alpine:latest
 LABEL maintainer="The M3DB Authors <[email protected]>"

-EXPOSE 2379/tcp 2380/tcp 7201/tcp 9000-9004/tcp
+EXPOSE 2379/tcp 2380/tcp 7201/tcp 7203/tcp 9000-9004/tcp

 COPY --from=builder /go/src/github.com/m3db/m3/bin/m3dbnode /bin/
 COPY --from=builder /go/src/github.com/m3db/m3/src/dbnode/config/m3dbnode-local-etcd.yml /etc/m3dbnode/m3dbnode.yml
```
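Port 7203 is newly exposed because the embedded coordinator serves its Prometheus metrics there (see `metrics.prometheus.listenAddress` in the configs below). Once the stack is up, a quick hedged check, assuming the port is published to localhost as in the compose file further down:

```sh
# Verify the coordinator metrics endpoint responds on the host.
curl -s http://localhost:7203/metrics | head
```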
(new file, 1 line):

```
f6192b8d35518098fbb645ffd807a53749ffb00e
```

Review discussion on this file:

- "Don't think this should be here; might be a good idea to regenerate your tokens too."
- "Yeah, I deleted the access token, thanks."
- "It was actually deleted before I accidentally pushed it here, but I'll reach out to GitHub support as well."
- "OK, just verified the access token doesn't work by running: … which returned … So I must have deleted the token a while back, before it ever made it into this PR. We're good 👍"
- "Yeah, just make sure you delete the token in GH settings and you should be good."
- "Wanna delete this file now?"
README.md (new file, 12 lines):

```markdown
# Local Development

This docker-compose file will set up the following environment:

1. Three M3DB nodes, with a single node acting as an etcd seed
2. One M3Coordinator node
3. One Grafana node (with a pre-configured Prometheus source)
4. One Prometheus node that scrapes the M3DB/M3Coordinator nodes and writes the metrics to M3Coordinator

## Usage

Use the `start.sh` and `stop.sh` scripts
```

Review comments on this file:

- "Mind linking a new section in https://github.com/m3db/m3/blob/master/DEVELOPER.md to this page?"
- On the scripts: "nit: Maybe give these scripts more descriptive names?" — "done"
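The scripts themselves aren't visible in this diff view. A minimal sketch of what they plausibly contain, assuming they are thin wrappers around docker-compose (the flags are assumptions, not taken from the PR):

```sh
#!/usr/bin/env sh
# start.sh -- hypothetical sketch: build the images and bring the stack up.
docker-compose up --build -d
```

```sh
#!/usr/bin/env sh
# stop.sh -- hypothetical sketch: tear the stack down and remove containers.
docker-compose down
```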
dbnode_config.yml (new file, 212 lines). Review discussion on this file:

- "re: dbnode_config.yaml and the coordinator config here: I'm a little concerned about the duplication of configs; it's one more thing to keep in sync. Any way we can merge with others we already have?"
- "Already synced with @robskillington, and this is the best we can do for now until we get some kind of templating system. I already did some weird stuff to avoid having three different M3DB configs."
- "sigh"

```yaml
coordinator:
  listenAddress:
    type: "config"
    value: "0.0.0.0:7201"

  metrics:
    scope:
      prefix: "coordinator"
    prometheus:
      handlerPath: /metrics
      listenAddress: 0.0.0.0:7203 # until https://github.com/m3db/m3/issues/682 is resolved
    sanitization: prometheus
    samplingRate: 1.0
    extended: none

db:
  logging:
    level: info

  metrics:
    prometheus:
      handlerPath: /metrics
    sanitization: prometheus
    samplingRate: 1.0
    extended: detailed

  listenAddress: 0.0.0.0:9000
  clusterListenAddress: 0.0.0.0:9001
  httpNodeListenAddress: 0.0.0.0:9002
  httpClusterListenAddress: 0.0.0.0:9003
  debugListenAddress: 0.0.0.0:9004

  hostID:
    resolver: environment
    envVarName: M3DB_HOST_ID

  client:
    writeConsistencyLevel: majority
    readConsistencyLevel: unstrict_majority
    writeTimeout: 10s
    fetchTimeout: 15s
    connectTimeout: 20s
    writeRetry:
      initialBackoff: 500ms
      backoffFactor: 3
      maxRetries: 2
      jitter: true
    fetchRetry:
      initialBackoff: 500ms
      backoffFactor: 2
      maxRetries: 3
      jitter: true
    backgroundHealthCheckFailLimit: 4
    backgroundHealthCheckFailThrottleFactor: 0.5

  gcPercentage: 100

  writeNewSeriesAsync: true
  writeNewSeriesLimitPerSecond: 1048576
  writeNewSeriesBackoffDuration: 2ms

  bootstrap:
    bootstrappers:
      - filesystem
      - peers
      - commitlog
      - uninitialized_topology
    fs:
      numProcessorsPerCPU: 0.125

  cache:
    series:
      policy: lru

  commitlog:
    flushMaxBytes: 524288
    flushEvery: 1s
    queue:
      calculationType: fixed
      size: 2097152
    blockSize: 10m

  fs:
    filePathPrefix: /var/lib/m3db
    writeBufferSize: 65536
    dataReadBufferSize: 65536
    infoReadBufferSize: 128
    seekReadBufferSize: 4096
    throughputLimitMbps: 100.0
    throughputCheckEvery: 128

  repair:
    enabled: false
    interval: 2h
    offset: 30m
    jitter: 1h
    throttle: 2m
    checkInterval: 1m

  pooling:
    blockAllocSize: 16
    type: simple
    seriesPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    blockPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    encoderPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    closersPool:
      size: 104857
      lowWatermark: 0.7
      highWatermark: 1.0
    contextPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    segmentReaderPool:
      size: 16384
      lowWatermark: 0.7
      highWatermark: 1.0
    iteratorPool:
      size: 2048
      lowWatermark: 0.7
      highWatermark: 1.0
    fetchBlockMetadataResultsPool:
      size: 65536
      capacity: 32
      lowWatermark: 0.7
      highWatermark: 1.0
    fetchBlocksMetadataResultsPool:
      size: 32
      capacity: 4096
      lowWatermark: 0.7
      highWatermark: 1.0
    hostBlockMetadataSlicePool:
      size: 131072
      capacity: 3
      lowWatermark: 0.7
      highWatermark: 1.0
    blockMetadataPool:
      size: 65536
      lowWatermark: 0.7
      highWatermark: 1.0
    blockMetadataSlicePool:
      size: 65536
      capacity: 32
      lowWatermark: 0.7
      highWatermark: 1.0
    blocksMetadataPool:
      size: 65536
      lowWatermark: 0.7
      highWatermark: 1.0
    blocksMetadataSlicePool:
      size: 32
      capacity: 4096
      lowWatermark: 0.7
      highWatermark: 1.0
    identifierPool:
      size: 262144
      lowWatermark: 0.7
      highWatermark: 1.0
    bytesPool:
      buckets:
        - capacity: 16
          size: 524288
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 32
          size: 262144
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 64
          size: 131072
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 128
          size: 65536
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 256
          size: 65536
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 1440
          size: 16384
          lowWatermark: 0.7
          highWatermark: 1.0
        - capacity: 4096
          size: 8192
          lowWatermark: 0.7
          highWatermark: 1.0

  config:
    service:
      env: default_env
      zone: embedded
      service: m3db
      cacheDir: /var/lib/m3kv
      etcdClusters:
        - zone: embedded
          endpoints:
            - m3db_seed:2379
    seedNodes:
      initialCluster:
        - hostID: m3db_seed
          endpoint: http://m3db_seed:2380
```
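All three dbnode containers mount this single file; the only per-node difference is the `M3DB_HOST_ID` environment variable, which the `hostID: resolver: environment` stanza reads at startup. A hedged illustration of the same mechanism outside compose (the `-f` config flag matches the m3dbnode Docker entrypoint, but treat the exact invocation as an assumption):

```sh
# The same config file serves every node; identity comes from the
# environment rather than from per-node config copies.
M3DB_HOST_ID=m3db_data01 m3dbnode -f /etc/m3dbnode/m3dbnode.yml
```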
docker-compose.yml (new file, 75 lines):

```yaml
version: "3.5"
services:
  m3db_seed:
    networks:
      - backend
    build:
      context: ../../../
      dockerfile: ./docker/m3dbnode/Dockerfile
    image: m3dbnode01:latest
    volumes:
      - "./dbnode_config.yml:/etc/m3dbnode/m3dbnode.yml"
    environment:
      - M3DB_HOST_ID=m3db_seed
  m3db_data01:
    networks:
      - backend
    build:
      context: ../../../
      dockerfile: ./docker/m3dbnode/Dockerfile
    image: m3dbnode02:latest
    volumes:
      - "./dbnode_config.yml:/etc/m3dbnode/m3dbnode.yml"
    environment:
      - M3DB_HOST_ID=m3db_data01
  m3db_data02:
    networks:
      - backend
    build:
      context: ../../../
      dockerfile: ./docker/m3dbnode/Dockerfile
    image: m3dbnode03:latest
    volumes:
      - "./dbnode_config.yml:/etc/m3dbnode/m3dbnode.yml"
    environment:
      - M3DB_HOST_ID=m3db_data02
  coordinator01:
    expose:
      - "7201"
      - "7203"
      - "7208"
    ports:
      - "0.0.0.0:7201:7201"
      - "0.0.0.0:7203:7203"
      - "0.0.0.0:7208:7208"
    networks:
      - backend
    build:
      context: ../../../
      dockerfile: ./docker/m3coordinator/Dockerfile
    image: m3coordinator01:latest
    volumes:
      - "./:/etc/m3coordinator/"
  prometheus01:
    expose:
      - "9090"
    ports:
      - "0.0.0.0:9090:9090"
    networks:
      - backend
    image: prom/prometheus:latest
    volumes:
      - "./:/etc/prometheus/"
  grafana2:
    build:
      context: ../../../
      dockerfile: ./docker/grafana/Dockerfile
    expose:
      - "3000"
    ports:
      - "0.0.0.0:3000:3000"
    networks:
      - backend
    image: grafana/grafana:latest
networks:
  backend:
```

Review comments on this file: on the `"7208"` expose entry and again on the `"0.0.0.0:7208:7208"` port mapping, a reviewer wrote "remove".
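The compose file mounts the working directory at `/etc/prometheus/`, so a `prometheus.yml` alongside these configs drives both the scraping and the remote writes into M3Coordinator described in the README. That file isn't shown in this diff view; the following is a hedged sketch, assuming the coordinator's standard Prometheus remote write endpoint on port 7201 and the metrics ports from the configs above (targets use the compose service names):

```yaml
# Hypothetical prometheus.yml sketch -- not part of this PR's diff.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: coordinator
    static_configs:
      - targets: ["coordinator01:7203"]  # coordinator Prometheus metrics (see config above)
  - job_name: dbnode
    static_configs:
      # Assumed metrics port for the dbnodes; the dbnode config above does
      # not pin an explicit metrics listen address.
      - targets: ["m3db_seed:7203", "m3db_data01:7203", "m3db_data02:7203"]

remote_write:
  - url: http://coordinator01:7201/api/v1/prom/remote/write
```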
m3coordinator config (new file, 47 lines):

```yaml
listenAddress:
  type: "config"
  value: "0.0.0.0:7201"

metrics:
  scope:
    prefix: "coordinator"
  prometheus:
    handlerPath: /metrics
    listenAddress: 0.0.0.0:7203 # until https://github.com/m3db/m3/issues/682 is resolved
  sanitization: prometheus
  samplingRate: 1.0
  extended: none

clusters:
  - namespaces:
      - namespace: prometheus_metrics
        type: unaggregated
        retention: 48h
    client:
      config:
        service:
          env: default_env
          zone: embedded
          service: m3db
          cacheDir: /var/lib/m3kv
          etcdClusters:
            - zone: embedded
              endpoints:
                - m3db_seed:2379
      writeConsistencyLevel: majority
      readConsistencyLevel: unstrict_majority
      writeTimeout: 10s
      fetchTimeout: 15s
      connectTimeout: 20s
      writeRetry:
        initialBackoff: 500ms
        backoffFactor: 3
        maxRetries: 2
        jitter: true
      fetchRetry:
        initialBackoff: 500ms
        backoffFactor: 2
        maxRetries: 3
        jitter: true
      backgroundHealthCheckFailLimit: 4
      backgroundHealthCheckFailThrottleFactor: 0.5
```
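The `clusters` block points the coordinator at the `prometheus_metrics` unaggregated namespace with 48h retention; the namespace and a placement still need to exist in etcd before writes succeed. A hedged way to check what's registered, assuming the coordinator's namespace listing endpoint (the API shape may have changed since this PR):

```sh
# List namespaces the coordinator knows about, via the published port 7201.
curl -s http://localhost:7201/api/v1/namespace
```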
Review discussion:

- "Do you need this change once you delete the file?"
- "This is where goreleaser expects it to be, so it might save someone else the trouble."
- "meh, separate PRs mate =P"
- "fiiiiiine haha"