Reworking some of these broken anchors (github#1560)
* Reworking some of these broken anchors

* missed one

* Add some fixes to try to get this to work

Co-authored-by: Charis Lam <[email protected]>
Loquacity and charislam authored Sep 12, 2022
1 parent e0ac2fe commit d5dd529
Showing 9 changed files with 994 additions and 853 deletions.
28 changes: 16 additions & 12 deletions cloud/integrations.md
@@ -19,8 +19,8 @@ CPU usage, RAM usage, and storage.
 
 Export telemetry data to Datadog by:
 
-1. [Creating a data exporter](#creating-a-data-exporter-for-datadog)
-1. [Attaching your database service to the exporter](#attaching-a-datadog-data-exporter-to-a-service)
+1. [Creating a data exporter][create-exporter-datadog]
+1. [Attaching your database service to the exporter][attach-exporter-datadog]
 
 <ExporterRegionNote />

@@ -58,8 +58,8 @@ documentation][datadog-docs].
 
 Export telemetry data to AWS CloudWatch by:
 
-1. [Creating a data exporter](#creating-a-data-exporter-for-aws-cloudwatch)
-1. [Attaching your database service to the exporter](#attaching-a-cloudwatch-data-exporter-to-a-service)
+1. [Creating a data exporter][create-exporter-aws]
+1. [Attaching your database service to the exporter][attach-exporter-aws]
 
 <ExporterRegionNote />

@@ -139,11 +139,15 @@ Delete any data exporters that you no longer need.
 
 </procedure>
 
-[aws-access-keys]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console
-[cloudwatch]: https://aws.amazon.com/cloudwatch/
-[cloudwatch-docs]: https://docs.aws.amazon.com/cloudwatch/index.html
-[cloudwatch-log-naming]: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html
-[datadog]: https://www.datadoghq.com
-[datadog-api-key]: https://docs.datadoghq.com/account_management/api-app-keys/#add-an-api-key-or-client-token
-[datadog-docs]: https://docs.datadoghq.com/
-[datadog-metrics-explorer]: https://app.datadoghq.com/metric/explorer
+[create-exporter-datadog]: /cloud/:currentVersion:/integrations/#export-telemetry-data-to-datadog
+[attach-exporter-datadog]: /cloud/:currentVersion:/integrations/#attaching-a-datadog-data-exporter-to-a-service
+[create-exporter-aws]: /cloud/:currentVersion:/integrations/#creating-a-data-exporter-for-aws-cloudwatch
+[attach-exporter-aws]: /cloud/:currentVersion:/integrations/#attaching-a-cloudwatch-data-exporter-to-a-service
+[aws-access-keys]: <https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console>
+[cloudwatch]: <https://aws.amazon.com/cloudwatch/>
+[cloudwatch-docs]: <https://docs.aws.amazon.com/cloudwatch/index.html>
+[cloudwatch-log-naming]: <https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html>
+[datadog]: <https://www.datadoghq.com>
+[datadog-api-key]: <https://docs.datadoghq.com/account_management/api-app-keys/#add-an-api-key-or-client-token>
+[datadog-docs]: <https://docs.datadoghq.com/>
+[datadog-metrics-explorer]: <https://app.datadoghq.com/metric/explorer>
9 changes: 5 additions & 4 deletions cloud/service-metrics.md
@@ -9,12 +9,11 @@ tags: [dashboard, cpu, memory, storage, disk space]
 # Service metrics
 
 You can view your service metrics from Timescale Cloud's
-[metrics dashboard](#metrics-dashboard). This dashboard gives you service-level
+[metrics dashboard][metrics-dashboard]. This dashboard gives you service-level
 information, such as CPU, memory, and storage usage.
 
 You can also view your query-level statistics by using the pre-installed
-[`pg_stat_statements`](#query-level-statistics-with-pg-stat-statements)
-extension from a PostgreSQL client.
+[`pg_stat_statements`][pg-stat] extension from a PostgreSQL client.
 
 ## Metrics dashboard
 
@@ -160,5 +159,7 @@ LIMIT 5;
 For more examples and detailed explanations, see the [blog post on identifying
 performance bottlenecks with `pg_stat_statements`][blog-pg_stat_statements].
 
-[blog-pg_stat_statements]: https://www.timescale.com/blog/identify-postgresql-performance-bottlenecks-with-pg_stat_statements/
+[metrics-dashboard]: /cloud/:currentVersion:/service-metrics/#metrics-dashboard
+[pg-stat]: /cloud/:currentVersion:/service-metrics/#query-level-statistics-with-pg-stat-statements
+[blog-pg_stat_statements]: <https://www.timescale.com/blog/identify-postgresql-performance-bottlenecks-with-pg_stat_statements/>
 [psql]: /timescaledb/:currentVersion:/how-to-guides/connecting/about-psql/
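The query-level statistics this file describes come straight from the `pg_stat_statements` view. As a minimal sketch of the kind of query the file's `LIMIT 5;` example ends with (column names assume PostgreSQL 13 or later, where the timing columns are named `total_exec_time` and `mean_exec_time`; earlier versions use `total_time` and `mean_time`):

```sql
-- The five statements with the highest mean execution time,
-- as tracked by the pre-installed pg_stat_statements extension.
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 5;
```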
@@ -68,8 +68,8 @@ four chunks, while the previous time intervals still include three:
 <img class="main-content__illustration" src="https://s3.amazonaws.com/assets.timescale.com/docs/images/repartitioning.png" alt="Diagram showing repartitioning on a distributed hypertable"/>
 
 This can affect queries that span the two different partitioning configurations.
-For more information, see the section on [limitations of query push
-down](#limitations-of-pushing-down-queries).
+For more information, see the section on
+[limitations of query push down][limitations].

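The repartitioning scenario this hunk describes occurs when the space-partition count is changed on an existing distributed hypertable, for example with TimescaleDB's `set_number_partitions` call. A sketch with a hypothetical hypertable name:

```sql
-- Hypothetical example: raise the space-partition count on the
-- 'conditions' distributed hypertable from 3 to 4. Only chunks created
-- after this call use the new partitioning; existing chunks keep the
-- old configuration, which is what the diagram above illustrates.
SELECT set_number_partitions('conditions', 4);
```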
## Replicating distributed hypertables

@@ -199,10 +199,11 @@ regular tables, with a few nuances. For example, if you `JOIN` a regular table
 and a distributed hypertable, the access node needs to fetch the raw data from
 the data nodes and perform the `JOIN` locally.
 
+[limitations]: /timescaledb/:currentVersion:/how-to-guides/distributed-hypertables/about-distributed-hypertables/#query-push-down/
 [hypertables]: /timescaledb/:currentVersion:/how-to-guides/hypertables/
-[limitations-pushing-down]: #limitations-of-pushing-down-queries
+[limitations-pushing-down]: #limitations-of-query-push-down
 [multi-node-ha]: /timescaledb/:currentVersion:/how-to-guides/multinode-timescaledb/multinode-ha/
 [multi-node]: /timescaledb/:currentVersion:/how-to-guides/multinode-timescaledb/
-[random-func]: https://www.postgresql.org/docs/current/functions-math.html#FUNCTIONS-MATH-RANDOM-TABLE
+[random-func]: <https://www.postgresql.org/docs/current/functions-math.html#FUNCTIONS-MATH-RANDOM-TABLE>
 [space-partitioning]: /timescaledb/:currentVersion:/how-to-guides/hypertables/about-hypertables#space-partitioning
-[volatility]: https://www.postgresql.org/docs/current/xfunc-volatility.html
+[volatility]: <https://www.postgresql.org/docs/current/xfunc-volatility.html>
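The `JOIN` behavior described in the last hunk of this file can be sketched with a hypothetical schema, where `devices` is a regular table living on the access node and `conditions` is a distributed hypertable:

```sql
-- The access node fetches the raw matching rows from the data nodes,
-- then performs the JOIN and the aggregation locally.
SELECT d.location, avg(c.temperature) AS avg_temp
FROM conditions c
JOIN devices d ON d.device_id = c.device_id
WHERE c.time > now() - INTERVAL '1 day'
GROUP BY d.location;
```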
5 changes: 3 additions & 2 deletions timescaledb/how-to-guides/migrate-data/migrate-influxdb.md
@@ -24,7 +24,7 @@ Before you start, make sure you have:
 * A running instance of InfluxDB and a means to connect to it.
 * An [installation of TimescaleDB][install] and a means to connect to it.
 * Data in your InfluxDB instance. If you need to import some sample data for a
-  test, see the instructions for [importing sample data](#import-sample-data).
+  test, see the instructions for [importing sample data][import-data].
 
 ## Procedures
 
@@ -117,7 +117,7 @@ schema-transfer`:
 outflux schema-transfer <DATABASE_NAME> <INFLUX_MEASUREMENT_NAME> \
 --input-server=http://localhost:8086 \
 --output-conn="dbname=tsdb user=tsdbadmin"
-```
+```
 To transfer all measurements from the database, leave out the measurement name
 argument.
 
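As the surrounding text notes, leaving out the measurement name transfers every measurement in the database. A sketch reusing the same connection flags as the example in this hunk (the database name remains a placeholder):

```shell
# Transfer the schema of all measurements in the InfluxDB database.
outflux schema-transfer <DATABASE_NAME> \
    --input-server=http://localhost:8086 \
    --output-conn="dbname=tsdb user=tsdbadmin"
```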
@@ -186,6 +186,7 @@ migrate`][outflux-migrate]. Alternatively, see the command line help:
 outflux migrate --help
 ```
 
+[import-data]: #import-sample-data-into-influxdb
 [influx-cmd]: https://docs.influxdata.com/influxdb/v1.7/tools/shell/
 [install]: /install/latest/
 [outflux-migrate]: https://github.com/timescale/outflux#migrate
