# Start quickstart rewrite
ChrisChinchilla committed Aug 17, 2020
1 parent 97279a4 commit 1bbca4b
Showing 3 changed files with 22 additions and 95 deletions.
1 change: 1 addition & 0 deletions docs-beta/config.toml
@@ -4,6 +4,7 @@ theme = "docs-theme"
baseURL = "/"
languageCode = "en-US"
defaultContentLanguage = "en"
staticDir = ["static"]

title = "M3DB Documentation"
metaDataFormat = "yaml"
116 changes: 21 additions & 95 deletions docs-beta/content/quickstart/_index.md
@@ -2,109 +2,35 @@
title = "Quickstart"
date = 2020-04-21T20:46:17-04:00
weight = 3
chapter = true
pre = "<b>3. </b>"
+++

### M3DB Single Node Deployment
Deploying a single-node cluster is a great way to experiment with M3DB and get a feel for what it has to offer. The official M3DB Docker image by default configures a single M3DB instance as one binary containing:

- An M3DB storage instance (`m3dbnode`) for timeseries storage. It includes an embedded tag-based metrics index, as well as an embedded etcd server for storing the cluster topology and runtime configuration.
- A coordinator instance (`m3coordinator`) for writing and querying tagged metrics, as well as managing cluster topology and runtime configuration.

To begin, start a Docker container with three ports exposed: port 7201 (used to manage the cluster topology), port 7203 (where Prometheus scrapes the metrics produced by M3DB and M3Coordinator), and port 9003 (used to read and write metrics). We recommend you create a persistent data directory on your host for durability between container restarts:

```shell
docker pull quay.io/m3db/m3dbnode:latest
docker run -p 7201:7201 -p 7203:7203 -p 9003:9003 --name m3db -v $(pwd)/m3db_data:/var/lib/m3db quay.io/m3db/m3dbnode:latest
```

Note: For the single-node case, we use this sample config file. If you inspect the file, you'll see that all the configuration is namespaced by `coordinator` or `db`. That's because this setup runs M3DB and M3Coordinator as one application. While this is convenient for testing and development, in production you'll want to run a clustered M3DB with a separate M3Coordinator.
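As a rough sketch of that layout (abbreviated and with illustrative keys and values, not the contents of the actual sample file), the combined configuration nests everything under those two top-level sections:

```yaml
# Illustrative sketch only: key names and values here are examples of the
# coordinator/db namespacing, not a copy of the shipped sample config.
coordinator:
  listenAddress: 0.0.0.0:7201   # cluster management and query API
db:
  listenAddress: 0.0.0.0:9000   # node read/write traffic
  filesystem:
    filePathPrefix: /var/lib/m3db   # matches the host volume mounted above
```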
Next, create an initial namespace for your metrics in the database using the curl command below. Keep in mind that the provided `namespaceName` must match the namespace in the `local` section of the M3Coordinator YAML configuration. If you choose to add any additional namespaces, you'll need to add them to the `local` section of M3Coordinator's YAML configuration as well.

```shell
curl -X POST http://localhost:7201/api/v1/database/create -d '{
  "type": "local",
  "namespaceName": "default",
  "retentionTime": "12h"
}'
```
{{% notice warning %}}
Deploying a single-node M3DB cluster is a great way to experiment with M3DB and get an idea of what it has to offer, but is not designed for production use.
{{% /notice %}}

Note: The `api/v1/database/create` endpoint is an abstraction over two M3DB concepts: placements and namespaces. If a placement doesn't exist, the endpoint creates one based on the `type` argument; if the placement already exists, it creates only the specified namespace. For now it's enough to understand that this creates M3DB namespaces (tables), but if you're going to run a clustered M3 setup in production, make sure you familiarize yourself with both concepts.

Placement initialization may take a minute or two. You can check its status by running the following:

```shell
curl http://localhost:7201/api/v1/placement | jq .
```
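If you want to script the wait rather than eyeball the output, something like the following works offline too. The JSON here is a hand-written sample standing in for the live placement response (the instance name and shard ids are made up; the `instances`/`shards`/`state` shape mirrors the real endpoint):

```shell
# Sample placement response; replace with the output of
# `curl http://localhost:7201/api/v1/placement` against a running node.
cat <<'EOF' > /tmp/placement_sample.json
{
  "placement": {
    "instances": {
      "m3db_local": {
        "shards": [
          {"id": 0, "state": "AVAILABLE"},
          {"id": 1, "state": "INITIALIZING"}
        ]
      }
    }
  }
}
EOF

# Count shards that are not yet AVAILABLE; bootstrapping is done at zero.
jq '[.placement.instances[].shards[] | select(.state != "AVAILABLE")] | length' /tmp/placement_sample.json
```

Polling this count in a loop until it reaches `0` is a simple readiness check.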

Once all of the shards become `AVAILABLE`, you should see your node finish bootstrapping. Don't worry if you see warnings or errors related to a local cache file, such as `[W] could not load cache from file /var/lib/m3kv/m3db_embedded.json`. These are expected for a local instance, and in general warn-level messages (prefixed with `[W]`) don't block bootstrapping.

```
02:28:30.008072[I] updating database namespaces [{adds [default]} {updates []} {removals []}]
02:28:30.270681[I] node tchannelthrift: listening on 0.0.0.0:9000
02:28:30.271909[I] cluster tchannelthrift: listening on 0.0.0.0:9001
02:28:30.519468[I] node httpjson: listening on 0.0.0.0:9002
02:28:30.520061[I] cluster httpjson: listening on 0.0.0.0:9003
02:28:30.520652[I] bootstrap finished [{namespace metrics} {duration 55.4µs}]
02:28:30.520909[I] bootstrapped
```

The node also self-hosts its OpenAPI docs, outlining the available endpoints. You can access them at `http://localhost:7201/api/v1/openapi` in your browser.
<!-- TODO: The Prometheus scraping point needs further explanation -->

Now you can experiment with writing tagged metrics:
```shell
curl -sS -X POST http://localhost:9003/writetagged -d '{
  "namespace": "default",
  "id": "foo",
  "tags": [
    {
      "name": "__name__",
      "value": "user_login"
    },
    {
      "name": "city",
      "value": "new_york"
    },
    {
      "name": "endpoint",
      "value": "/request"
    }
  ],
  "datapoint": {
    "timestamp": '"$(date "+%s")"',
    "value": 42.123456789
  }
}'
```
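The inline shell quoting above gets fiddly as payloads grow. As a sketch (assuming `jq` is installed, which the quickstart already uses for pretty-printing), you can build the same payload programmatically:

```shell
# Build the writetagged payload with jq; field names mirror the example above.
payload=$(jq -n --arg ts "$(date "+%s")" '{
  namespace: "default",
  id: "foo",
  tags: [
    {name: "__name__", value: "user_login"},
    {name: "city", value: "new_york"},
    {name: "endpoint", value: "/request"}
  ],
  datapoint: {timestamp: ($ts | tonumber), value: 42.123456789}
}')
echo "$payload" | jq -r '.id'   # sanity-check the payload
```

You could then send it with, e.g., `echo "$payload" | curl -sS -X POST http://localhost:9003/writetagged -d @-`.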

Note: The example above includes the tag `__name__` because it is a reserved tag in Prometheus, and using it makes querying the metric much easier. For example, if you have M3Query set up as a Prometheus datasource in Grafana, you can query for the metric using the following PromQL query:

```
user_login{city="new_york",endpoint="/request"}
```

You can then read the metrics you've written using the M3DB `/query` endpoint:

```shell
curl -sS -X POST http://localhost:9003/query -d '{
  "namespace": "default",
  "query": {
    "regexp": {
      "field": "city",
      "regexp": ".*"
    }
  },
  "rangeStart": 0,
  "rangeEnd": '"$(date "+%s")"'
}' | jq .
```

The query returns output like the following:

```json
{
  "results": [
    {
      "id": "foo",
      "tags": [
        {
          "name": "__name__",
          "value": "user_login"
        },
        {
          "name": "city",
          "value": "new_york"
        },
        {
          "name": "endpoint",
          "value": "/request"
        }
      ],
      "datapoints": [
        {
          "timestamp": 1527039389,
          "value": 42.123456789
        }
      ]
    }
  ],
  "exhaustive": true
}
```
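When scripting against this endpoint, you'll usually want to pull fields out of the response rather than read raw JSON. As an offline sketch, the sample below mirrors the response shown above (saved to a hypothetical `/tmp/query_response.json`) and extracts each series id with its values:

```shell
# Sample /query response mirroring the one shown above.
cat <<'EOF' > /tmp/query_response.json
{
  "results": [
    {
      "id": "foo",
      "tags": [
        {"name": "__name__", "value": "user_login"},
        {"name": "city", "value": "new_york"},
        {"name": "endpoint", "value": "/request"}
      ],
      "datapoints": [
        {"timestamp": 1527039389, "value": 42.123456789}
      ]
    }
  ],
  "exhaustive": true
}
EOF

# Print "<series id>: <comma-separated datapoint values>" per result.
jq -r '.results[] | "\(.id): \(.datapoints | map(.value | tostring) | join(","))"' /tmp/query_response.json
```

Against a live node, pipe the `curl ... /query` output straight into the same `jq` filter.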
<!-- TODO: Perfect image, pref with terminalizer -->

Now that you've got the M3 stack up and running, take a look at the rest of our documentation to see how you can integrate with Prometheus and Graphite.
![Docker pull and run](/docker-install.gif)
Binary file added docs-beta/static/docker-install.gif
