
Commit

Link fixes
Signed-off-by: ChrisChinchilla <[email protected]>
ChrisChinchilla committed Sep 16, 2020
1 parent a4ca3db commit 8c300d5
Showing 30 changed files with 47 additions and 47 deletions.
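The bulk of the 47 changed lines applies one mechanical pattern: page-relative Markdown links such as `(../troubleshooting)` become root-relative links such as `(/troubleshooting)`. A minimal sketch of that rewrite, assuming a naive regex-based helper (hypothetical, not a script from this repository; the handful of links that also gain a directory prefix, e.g. `(query)` to `(/how_to/query)`, would still need hand edits):

```python
import re

# Hypothetical helper illustrating the rewrite applied throughout this
# commit: anchor relative Markdown link targets at the site root.
def rootify_links(markdown: str) -> str:
    def repl(match: re.Match) -> str:
        # Drop any leading "../" segments, then prefix with "/".
        target = re.sub(r"^(\.\./)+", "", match.group(1))
        return "](/" + target + ")"

    # Only rewrite Markdown link targets (preceded by "]") that are
    # neither absolute URLs, nor already root-relative, nor anchors.
    return re.sub(r"\]\((?!https?://|/|#)([^)\s]+)\)", repl, markdown)
```

Running `rootify_links` over a docs page would leave absolute URLs untouched while anchoring relative targets at the site root.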
2 changes: 1 addition & 1 deletion docs/content/coordinator/api/remote.md
@@ -73,7 +73,7 @@ promremotecli_log 2019/06/25 04:13:56 write success
# quay.io/m3db/prometheus_remote_client_golang@sha256:fc56df819bff9a5a087484804acf3a584dd4a78c68900c31a28896ed66ca7e7b
```

-For more details on querying data in PromQL that was written using this endpoint, see the [query API documentation](../../query_engine/api/).
+For more details on querying data in PromQL that was written using this endpoint, see the [query API documentation](/../query_engine/api/).

## Remote Read

8 changes: 4 additions & 4 deletions docs/content/faqs/_index.md
@@ -11,7 +11,7 @@ Yes, you can definitely do that. It's all just about setting the etcd endpoints
Yes, you can use the [Prometheus remote write client](https://github.com/m3db/prometheus_remote_client_golang/).

- **Why does my dbnode keep OOM’ing?**
-Refer to the [troubleshooting guide](../troubleshooting).
+Refer to the [troubleshooting guide](/troubleshooting).

- **Do you support PromQL?**
Yes, M3Query and M3Coordinator both support PromQL.
@@ -33,7 +33,7 @@ If you’re adding namespaces, the m3dbnode process will pickup the new namespac
If you’re removing or modifying an existing namespace, you’ll need to restart the m3dbnode process in order to complete the namespace deletion/modification process. It is recommended to restart one node at a time and wait for a node to be completely bootstrapped before restarting another node.

- **How do I set up aggregation in the coordinator?**
-Refer to the [Aggregation section](../how_to/query) of the M3Query how-to guide.
+Refer to the [Aggregation section](/how_to/query) of the M3Query how-to guide.

- **How do I set up aggregation using a separate aggregation tier?**
See this [WIP documentation](https://github.com/m3db/m3/pull/1741/files#diff-0a1009f86783ca8fd4499418e556c6f5).
@@ -65,7 +65,7 @@ etcdClusters:
```

- **How can I get a heap dump, cpu profile, etc.**
-See our docs on the [/debug/dump api](../troubleshooting)
+See our docs on the [/debug/dump api](/troubleshooting)

- **How much memory utilization should I run M3DB at?**
We recommend not going above 50%.
@@ -74,7 +74,7 @@ We recommend not going above 50%.
TBA

- **What is the recommended way to create a new namespace?**
-Refer to the [Namespace configuration guide](../operational_guide/namespace_configuration).
+Refer to the [Namespace configuration guide](/operational_guide/namespace_configuration).

- **How can I see the cardinality of my metrics?**
Currently, the best way is to go to the [M3DB Node Details Dashboard](https://grafana.com/grafana/dashboards/8126) and look at the `Ticking` panel. However, this is not entirely accurate because of the way data is stored in M3DB -- time series are stored inside time-based blocks that you configure. In actuality, the `Ticking` graph shows you how many unique series there are for the most recent block that has persisted. In the future, we plan to introduce easier ways to determine the number of unique time series.
12 changes: 6 additions & 6 deletions docs/content/how_to/cluster_hard_way.md
@@ -56,7 +56,7 @@ M3DB_HOST_ID=m3db001 m3dbnode -f config.yml

### Kernel

-Ensure you review our [recommended kernel configuration](../operational_guide/kernel_configuration) before running M3DB in production as M3DB may exceed the default limits for some default kernel values.
+Ensure you review our [recommended kernel configuration](/operational_guide/kernel_configuration) before running M3DB in production as M3DB may exceed the default limits for some default kernel values.

## Config files

@@ -107,8 +107,8 @@ m3dbnode -f <config-name.yml>

The recommended way to create a namespace and initialize a topology is to use the `/api/v1/database/create` api. Below is an example.

-**Note:** In order to create a more custom setup, please refer to the [namespace configuration](../operational_guide/namespace_configuration) and
-[placement configuration](../operational_guide/placement_configuration) guides, though this is discouraged.
+**Note:** In order to create a more custom setup, please refer to the [namespace configuration](/operational_guide/namespace_configuration) and
+[placement configuration](/operational_guide/placement_configuration) guides, though this is discouraged.

```shell
curl -X POST http://localhost:7201/api/v1/database/create -d '{
@@ -167,11 +167,11 @@ If you need to setup multiple namespaces, you can run the above `/api/v1/databas

### Replication factor (RF)

-Recommended is RF3, where each replica is spread across failure domains such as a rack, data center or availability zone. See [Replication Factor Recommendations](../operational_guide/replication_and_deployment_in_zones) for more specifics.
+Recommended is RF3, where each replica is spread across failure domains such as a rack, data center or availability zone. See [Replication Factor Recommendations](/operational_guide/replication_and_deployment_in_zones) for more specifics.

### Shards

-See [placement configuration](../operational_guide/placement_configuration) to determine the appropriate number of shards to specify.
+See [placement configuration](/operational_guide/placement_configuration) to determine the appropriate number of shards to specify.

## Test it out

@@ -216,4 +216,4 @@ curl -sS -X POST http://localhost:9003/query -d '{

## Integrations

-[Prometheus as a long term storage remote read/write endpoint](../integrations/prometheus).
+[Prometheus as a long term storage remote read/write endpoint](/integrations/prometheus).
4 changes: 2 additions & 2 deletions docs/content/how_to/kubernetes.md
@@ -279,7 +279,7 @@ curl -sSf -X POST localhost:7201/api/v1/placement -d '{
### Prometheus
-As mentioned in our integrations [guide](../integrations/prometheus), M3DB can be used as a [remote read/write
+As mentioned in our integrations [guide](/integrations/prometheus), M3DB can be used as a [remote read/write
endpoint](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cremote_write%3E) for Prometheus.
If you run Prometheus on your Kubernetes cluster you can easily point it at M3DB in your Prometheus server config:
@@ -311,4 +311,4 @@ certain nodes. Specifically:
choose.
2. Via `nodeAffinity` the pods prefer to run on nodes with the label `m3db.io/dedicated-m3db="true"`.
-[kernel]: ../operational_guide/kernel_configuration.md
+[kernel]: /operational_guide/kernel_configuration
8 changes: 4 additions & 4 deletions docs/content/how_to/query.md
@@ -5,11 +5,11 @@ weight: 4
---


-m3query is used to query data that is stored in M3DB. For instance, if you are using the Prometheus remote write endpoint with [m3coordinator](../integrations/prometheus), you can use m3query instead of the Prometheus remote read endpoint. By doing so, you get all of the benefits of m3query's engine such as [block processing](http://m3db.github.io/m3/query_engine/architecture/blocks/). Furthermore, since m3query provides a Prometheus compatible API, you can use 3rd party graphing and alerting solutions like Grafana.
+m3query is used to query data that is stored in M3DB. For instance, if you are using the Prometheus remote write endpoint with [m3coordinator](/integrations/prometheus), you can use m3query instead of the Prometheus remote read endpoint. By doing so, you get all of the benefits of m3query's engine such as [block processing](http://m3db.github.io/m3/query_engine/architecture/blocks/). Furthermore, since m3query provides a Prometheus compatible API, you can use 3rd party graphing and alerting solutions like Grafana.

## Configuration

-Before setting up m3query, make sure that you have at least [one M3DB node running](single_node). In order to start m3query, you need to configure a `yaml` file, that will be used to connect to M3DB. Here is a link to a [sample config](https://github.com/m3db/m3/blob/master/src/query/config/m3query-local-etcd.yml) file that is used for an embedded etcd cluster within M3DB.
+Before setting up m3query, make sure that you have at least [one M3DB node running](/how_to/single_node). In order to start m3query, you need to configure a `yaml` file, that will be used to connect to M3DB. Here is a link to a [sample config](https://github.com/m3db/m3/blob/master/src/query/config/m3query-local-etcd.yml) file that is used for an embedded etcd cluster within M3DB.

### Running

@@ -24,11 +24,11 @@ Or you can run it with Docker using the Docker file located at `$GOPATH/src/gith

### Namespaces

-All namespaces that you wish to query from must be configured when [setting up M3DB](single_node). If you wish to add or change an existing namespace, please follow the namespace operational guide [here](../operational_guide/namespace_configuration).
+All namespaces that you wish to query from must be configured when [setting up M3DB](/how_to/single_node). If you wish to add or change an existing namespace, please follow the namespace operational guide [here](/operational_guide/namespace_configuration).

### etcd

-The configuration file linked above uses an embedded etcd cluster, which is fine for development purposes. However, if you wish to use this in production, you will want an [external etcd](../operational_guide/etcd) cluster.
+The configuration file linked above uses an embedded etcd cluster, which is fine for development purposes. However, if you wish to use this in production, you will want an [external etcd](/operational_guide/etcd) cluster.

<!-- TODO: link to etcd operational guide -->

10 changes: 5 additions & 5 deletions docs/content/how_to/single_node.md
@@ -21,9 +21,9 @@ docker pull quay.io/m3db/m3dbnode:latest
docker run -p 7201:7201 -p 7203:7203 -p 9003:9003 --name m3db -v $(pwd)/m3db_data:/var/lib/m3db quay.io/m3db/m3dbnode:latest
```

-**Note:** For the single node case, we use this [sample config file](https://github.com/m3db/m3/blob/master/src/dbnode/config/m3dbnode-local-etcd.yml). If you inspect the file, you'll see that all the configuration is grouped by `coordinator` or `db`. That's because this setup runs `M3DB` and `M3Coordinator` as one application. While this is convenient for testing and development, you'll want to run clustered `M3DB` with a separate `M3Coordinator` in production. You can read more about that [here.](cluster_hard_way).
+**Note:** For the single node case, we use this [sample config file](https://github.com/m3db/m3/blob/master/src/dbnode/config/m3dbnode-local-etcd.yml). If you inspect the file, you'll see that all the configuration is grouped by `coordinator` or `db`. That's because this setup runs `M3DB` and `M3Coordinator` as one application. While this is convenient for testing and development, you'll want to run clustered `M3DB` with a separate `M3Coordinator` in production. You can read more about that [here.](/how_to/cluster_hard_way).

-Next, create an initial namespace for your metrics in the database using the cURL below. Keep in mind that the provided `namespaceName` must match the namespace in the `local` section of the `M3Coordinator` YAML configuration, and if you choose to [add any additional namespaces](../operational_guide/namespace_configuration) you'll need to add them to the `local` section of `M3Coordinator`'s YAML config as well.
+Next, create an initial namespace for your metrics in the database using the cURL below. Keep in mind that the provided `namespaceName` must match the namespace in the `local` section of the `M3Coordinator` YAML configuration, and if you choose to [add any additional namespaces](/operational_guide/namespace_configuration) you'll need to add them to the `local` section of `M3Coordinator`'s YAML config as well.

<!-- TODO: Retention actually different -->

@@ -35,7 +35,7 @@ curl -X POST http://localhost:7201/api/v1/database/create -d '{
}'
```

-**Note**: The `api/v1/database/create` endpoint is abstraction over two concepts in M3DB called [placements](../operational_guide/placement) and [namespaces](../operational_guide/namespace_configuration). If a placement doesn't exist, it will create one based on the `type` argument, otherwise if the placement already exists, it just creates the specified namespace. For now it's enough to just understand that it creates M3DB namespaces (tables), but if you're going to run a clustered M3 setup in production, make sure you familiarize yourself with the links above.
+**Note**: The `api/v1/database/create` endpoint is abstraction over two concepts in M3DB called [placements](/operational_guide/placement) and [namespaces](/operational_guide/namespace_configuration). If a placement doesn't exist, it will create one based on the `type` argument, otherwise if the placement already exists, it just creates the specified namespace. For now it's enough to just understand that it creates M3DB namespaces (tables), but if you're going to run a clustered M3 setup in production, make sure you familiarize yourself with the links above.

Placement initialization may take a minute or two and you can check on the status of this by running the following:

@@ -92,7 +92,7 @@ curl -sS -X POST http://localhost:9003/writetagged -d '{

**Note:** In the above example we include the tag `__name__`. This is because `__name__` is a
reserved tag in Prometheus and will make querying the metric much easier. For example, if you have
-[M3Query](query) setup as a Prometheus datasource in Grafana, you can then query for the metric
+[M3Query](/how_to/query) setup as a Prometheus datasource in Grafana, you can then query for the metric
using the following PromQL query:

```shell
@@ -144,4 +144,4 @@ curl -sS -X POST http://localhost:9003/query -d '{
}
```

-Now that you've got the M3 stack up and running, take a look at the rest of our documentation to see how you can integrate with [Prometheus](../integrations/prometheus) and [Graphite](../integrations/graphite)
+Now that you've got the M3 stack up and running, take a look at the rest of our documentation to see how you can integrate with [Prometheus](/integrations/prometheus) and [Graphite](/integrations/graphite)
4 changes: 2 additions & 2 deletions docs/content/how_to/use_as_tsdb.md
@@ -6,13 +6,13 @@ title: Using M3DB as a general purpose time series database

## Overview

-M3 has native integrations that make it particularly easy to use it as a metrics storage for [Prometheus](../integrations/prometheus) and [Graphite](../integrations/graphite). M3DB can also be used as a general purpose distributed time series database by itself.
+M3 has native integrations that make it particularly easy to use it as a metrics storage for [Prometheus](/integrations/prometheus) and [Graphite](/integrations/graphite). M3DB can also be used as a general purpose distributed time series database by itself.

## Data Model

### IDs and Tags

-M3DB's data model allows multiple namespaces, each of which can be [configured and tuned independently](../operational_guide/namespace_configuration).
+M3DB's data model allows multiple namespaces, each of which can be [configured and tuned independently](/operational_guide/namespace_configuration).

Each namespace can also be configured with its own schema (see "Schema Modeling" section below).

4 changes: 2 additions & 2 deletions docs/content/integrations/graphite.md
@@ -11,7 +11,7 @@ M3 supports ingesting Graphite metrics using the [Carbon plaintext protocol](htt

## Ingestion

-Setting up the M3 stack to ingest carbon metrics is straightforward. First, make sure you've followed our [other documentation](../how_to/single_node) to get m3coordinator and M3DB setup. Also, familiarize yourself with how M3 [handles aggregation](../how_to/query).
+Setting up the M3 stack to ingest carbon metrics is straightforward. First, make sure you've followed our [other documentation](/how_to/single_node) to get m3coordinator and M3DB setup. Also, familiarize yourself with how M3 [handles aggregation](/how_to/query).

Once you have both of those services running properly, modify your m3coordinator configuration to add the following lines and restart it:

@@ -21,7 +21,7 @@ carbon:
listenAddress: "0.0.0.0:7204"
```
-This will enable a line-based TCP carbon ingestion server on the specified port. By default, the server will write all carbon metrics to every aggregated namespace specified in the m3coordinator [configuration file](../how_to/query) and aggregate them using a default strategy of `mean` (equivalent to Graphite's `Average`).
+This will enable a line-based TCP carbon ingestion server on the specified port. By default, the server will write all carbon metrics to every aggregated namespace specified in the m3coordinator [configuration file](/how_to/query) and aggregate them using a default strategy of `mean` (equivalent to Graphite's `Average`).

This default setup makes sense if your carbon metrics are unaggregated, however, if you've already aggregated your data using something like [statsite](https://github.com/statsite/statsite) then you may want to disable M3 aggregation. In that case, you can do something like the following:

2 changes: 1 addition & 1 deletion docs/content/integrations/prometheus.md
@@ -11,7 +11,7 @@ To write to a remote M3DB cluster the simplest configuration is to run `m3coordi

Start by downloading the [config template](https://github.com/m3db/m3/blob/master/src/query/config/m3coordinator-cluster-template.yml). Update the `namespaces` and the `client` section for a new cluster to match your cluster's configuration.

-You'll need to specify the static IPs or hostnames of your M3DB seed nodes, and the name and retention values of the namespace you set up. You can leave the namespace storage metrics type as `unaggregated` since it's required by default to have a cluster that receives all Prometheus metrics unaggregated. In the future you might also want to aggregate and downsample metrics for longer retention, and you can come back and update the config once you've setup those clusters. You can read more about our aggregation functionality [here](../how_to/query).
+You'll need to specify the static IPs or hostnames of your M3DB seed nodes, and the name and retention values of the namespace you set up. You can leave the namespace storage metrics type as `unaggregated` since it's required by default to have a cluster that receives all Prometheus metrics unaggregated. In the future you might also want to aggregate and downsample metrics for longer retention, and you can come back and update the config once you've setup those clusters. You can read more about our aggregation functionality [here](/how_to/query).

It should look something like:

File renamed without changes.
@@ -20,7 +20,7 @@ While reading it, we recommend referring to [the default configuration file](htt
We recommend running the client with `writeConsistencyLevel` set to `majority` and `readConsistencyLevel` set to `unstrict_majority`.
This means that all writes must be acknowledged by a quorum of nodes in order to be considered successful, and that reads will attempt to achieve quorum, but will return the data from a single node if they are unable to achieve quorum. This ensures that reads are normally consistent, but that degraded conditions will not cause reads to fail outright as long as at least a single node can satisfy the request.

-You can read about the consistency levels in more detail in [the Consistency Levels section](../m3db/architecture/consistencylevels)
+You can read about the consistency levels in more detail in [the Consistency Levels section](/m3db/architecture/consistencylevels)
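As a rough illustration of the quorum arithmetic behind these two levels (a hypothetical sketch, not M3DB code; the function names are made up for this example):

```python
# Sketch of quorum-based consistency for a replication factor of 3.
def quorum(replication_factor: int) -> int:
    # A majority of replicas: e.g. 2 of 3, 3 of 5.
    return replication_factor // 2 + 1

def write_succeeds(acks: int, rf: int = 3) -> bool:
    # writeConsistencyLevel: majority -- a write needs a quorum of acks.
    return acks >= quorum(rf)

def read_succeeds(responses: int, rf: int = 3) -> bool:
    # readConsistencyLevel: unstrict_majority -- a read attempts quorum
    # but degrades to returning data from any single responding node.
    return responses >= 1
```

So with RF 3, a write fails once two replicas are unreachable, while a read keeps succeeding as long as one replica can respond.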

### Commitlog Configuration

@@ -88,7 +88,7 @@ This issue requires an operator with significant M3DB operational experience to

The most important thing to understand is that **if you want to guarantee that you will be able to read the result of every successful write, then both writes and reads must be done with `majority` consistency.**
This means that both writes _and_ reads will fail if a quorum of nodes are unavailable for a given shard.
-You can read about the consistency levels in more detail in [the Consistency Levels section](../m3db/architecture/consistencylevels)
+You can read about the consistency levels in more detail in [the Consistency Levels section](/m3db/architecture/consistencylevels)

### Commitlog Configuration
