replace autolinks in release-7.1 #14717

Merged: 1 commit, Sep 4, 2023
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -88,7 +88,7 @@ Your Pull Requests can only be merged after you sign the [Contributor License Ag

### Step 1: Fork the repository

- 1. Visit the project: <https://github.com/pingcap/docs>
+ 1. Visit the project: [https://github.com/pingcap/docs](https://github.com/pingcap/docs)
2. Click the **Fork** button on the top right and wait for it to finish.

### Step 2: Clone the forked repository to local storage
@@ -149,7 +149,7 @@ git push -u origin new-branch-name # "-u" is used to track the remote branch fro

### Step 8: Create a pull request

- 1. Visit your fork at <https://github.com/$user/docs> (replace `$user` with your GitHub ID)
+ 1. Visit your fork at [https://github.com/$user/docs](https://github.com/$user/docs) (replace `$user` with your GitHub ID)
2. Click the `Compare & pull request` button next to your `new-branch-name` branch to create your PR. See [Pull Request Title Style](https://github.com/pingcap/community/blob/master/contributors/commit-message-pr-style.md#pull-request-title-style).

Now, your PR is successfully submitted! After this PR is merged, you will automatically become a contributor to TiDB documentation.
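Every hunk in this PR applies the same mechanical rewrite: a CommonMark autolink `<url>` becomes an explicit `[url](url)` inline link. That substitution can be sketched as a one-liner (an illustrative sketch, not the script actually used for this PR; a real pass over the docs would also need to skip fenced code blocks and inline code spans):

```shell
# Rewrite <http(s)://...> autolinks into explicit [url](url) links.
# Illustrative sketch only: it does not skip fenced code blocks
# or inline code spans.
sed -E 's|<(https?://[^>]+)>|[\1](\1)|g' <<'EOF'
1. Visit the project: <https://github.com/pingcap/docs>
EOF
```

To look for autolinks remaining after such a pass, something like `grep -rnE '<https?://[^>]+>' --include='*.md' .` can help.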
2 changes: 1 addition & 1 deletion best-practices/tidb-best-practices.md
@@ -194,7 +194,7 @@ There are lots of items in the monitoring system, the majority of which are for

In addition to monitoring, you can also view the system logs. The three components of TiDB, tidb-server, tikv-server, and pd-server, each has a `--log-file` parameter. If this parameter has been configured when the cluster is started, logs are stored in the file configured by the parameter and log files are automatically archived on a daily basis. If the `--log-file` parameter has not been configured, the log is output to `stderr`.

- Starting from TiDB 4.0, TiDB provides [TiDB Dashboard](/dashboard/dashboard-intro.md) UI to improve usability. You can access TiDB Dashboard by visiting <http://${PD_IP}:${PD_PORT}/dashboard> in your browser. TiDB Dashboard provides features such as viewing cluster status, performance analysis, traffic visualization, cluster diagnostics, and log searching.
+ Starting from TiDB 4.0, TiDB provides [TiDB Dashboard](/dashboard/dashboard-intro.md) UI to improve usability. You can access TiDB Dashboard by visiting [http://${PD_IP}:${PD_PORT}/dashboard](http://${PD_IP}:${PD_PORT}/dashboard) in your browser. TiDB Dashboard provides features such as viewing cluster status, performance analysis, traffic visualization, cluster diagnostics, and log searching.

### Documentation

2 changes: 1 addition & 1 deletion dashboard/continuous-profiling.md
@@ -42,7 +42,7 @@ You can access the Continuous Profiling page using either of the following metho

![Access page](/media/dashboard/dashboard-conprof-access.png)

- * Visit <http://127.0.0.1:2379/dashboard/#/continuous_profiling> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/continuous_profiling](http://127.0.0.1:2379/dashboard/#/continuous_profiling) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

## Enable Continuous Profiling

4 changes: 2 additions & 2 deletions dashboard/dashboard-access.md
@@ -5,15 +5,15 @@ summary: Learn how to access TiDB Dashboard.

# Access TiDB Dashboard

- To access TiDB Dashboard, visit <http://127.0.0.1:2379/dashboard> via your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ To access TiDB Dashboard, visit [http://127.0.0.1:2379/dashboard](http://127.0.0.1:2379/dashboard) via your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

> **Note:**
>
> TiDB v6.5.0 (and later) and TiDB Operator v1.4.0 (and later) support deploying TiDB Dashboard as an independent Pod on Kubernetes. Using TiDB Operator, you can access the IP address of this Pod to start TiDB Dashboard. For details, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently).

## Access TiDB Dashboard when multiple PD instances are deployed

- When multiple PD instances are deployed in your cluster and you can directly access **every** PD instance and port, you can simply replace `127.0.0.1:2379` in the <http://127.0.0.1:2379/dashboard/> address with **any** PD instance address and port.
+ When multiple PD instances are deployed in your cluster and you can directly access **every** PD instance and port, you can simply replace `127.0.0.1:2379` in the [http://127.0.0.1:2379/dashboard/](http://127.0.0.1:2379/dashboard/) address with **any** PD instance address and port.

> **Note:**
>
2 changes: 1 addition & 1 deletion dashboard/dashboard-cluster-info.md
@@ -13,7 +13,7 @@ You can use one of the following two methods to access the cluster information p

* After logging in to TiDB Dashboard, click **Cluster Info** in the left navigation menu.

- * Visit <http://127.0.0.1:2379/dashboard/#/cluster_info/instance> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/cluster_info/instance](http://127.0.0.1:2379/dashboard/#/cluster_info/instance) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

## Instance list

2 changes: 1 addition & 1 deletion dashboard/dashboard-key-visualizer.md
@@ -15,7 +15,7 @@ You can use one of the following two methods to access the Key Visualizer page:

![Access Key Visualizer](/media/dashboard/dashboard-keyviz-access-v650.png)

- * Visit <http://127.0.0.1:2379/dashboard/#/keyviz> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/keyviz](http://127.0.0.1:2379/dashboard/#/keyviz) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

## Interface demonstration

6 changes: 3 additions & 3 deletions dashboard/dashboard-ops-security.md
@@ -27,7 +27,7 @@ It is recommended that you create a least-privileged SQL user to access and sign
>
> TiDB v6.5.0 (and later) and TiDB Operator v1.4.0 (and later) support deploying TiDB Dashboard as an independent Pod on Kubernetes. Using TiDB Operator, you can access the IP address of this Pod to start TiDB Dashboard. This port does not communicate with other privileged interfaces of PD and no extra firewall is required if provided externally. For details, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently).

- TiDB Dashboard provides services through the PD client port, which defaults to <http://IP:2379/dashboard/>. Although TiDB Dashboard requires identity authentication, other privileged interfaces (such as <http://IP:2379/pd/api/v1/members>) in PD carried on the PD client port do not require identity authentication and can perform privileged operations. Therefore, exposing the PD client port directly to the external network is extremely risky.
+ TiDB Dashboard provides services through the PD client port, which defaults to [http://IP:2379/dashboard/](http://IP:2379/dashboard/). Although TiDB Dashboard requires identity authentication, other privileged interfaces (such as [http://IP:2379/pd/api/v1/members](http://IP:2379/pd/api/v1/members)) in PD carried on the PD client port do not require identity authentication and can perform privileged operations. Therefore, exposing the PD client port directly to the external network is extremely risky.

It is recommended that you take the following measures:

@@ -79,11 +79,11 @@ The following is a sample output:
http://192.168.0.123:2379/dashboard/
```

- In this example, the firewall needs to allow inbound access to port `2379` on the exposed IP `192.168.0.123`, and TiDB Dashboard is accessed via <http://192.168.0.123:2379/dashboard/>.
+ In this example, the firewall needs to allow inbound access to port `2379` on the exposed IP `192.168.0.123`, and TiDB Dashboard is accessed via [http://192.168.0.123:2379/dashboard/](http://192.168.0.123:2379/dashboard/).

## Reverse proxy only for TiDB Dashboard

- As mentioned in [Use a firewall to block untrusted access](#use-a-firewall-to-block-untrusted-access), the services provided under the PD client port include not only TiDB Dashboard (located at <http://IP:2379/dashboard/>), but also other privileged interfaces in PD (such as <http://IP:2379/pd/api/v1/members>). Therefore, when using a reverse proxy to provide TiDB Dashboard to the external network, ensure that **ONLY** services with the `/dashboard` prefix are provided (**NOT** all services under the port), so that the external network cannot reach the privileged interfaces in PD through the reverse proxy.
+ As mentioned in [Use a firewall to block untrusted access](#use-a-firewall-to-block-untrusted-access), the services provided under the PD client port include not only TiDB Dashboard (located at [http://IP:2379/dashboard/](http://IP:2379/dashboard/)), but also other privileged interfaces in PD (such as [http://IP:2379/pd/api/v1/members](http://IP:2379/pd/api/v1/members)). Therefore, when using a reverse proxy to provide TiDB Dashboard to the external network, ensure that **ONLY** services with the `/dashboard` prefix are provided (**NOT** all services under the port), so that the external network cannot reach the privileged interfaces in PD through the reverse proxy.

It is recommended that you see [Use TiDB Dashboard behind a Reverse Proxy](/dashboard/dashboard-ops-reverse-proxy.md) to learn a safe and recommended reverse proxy configuration.

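The "reverse proxy only for TiDB Dashboard" advice above can be sketched as an NGINX fragment. This is an illustrative assumption, not the configuration from the linked guide: the listen port `8080` and the upstream address `192.168.0.123:2379` are placeholders.

```nginx
server {
    listen 8080;

    # Expose ONLY the /dashboard prefix. Privileged PD endpoints such as
    # /pd/api/v1/members stay unreachable because no other location matches.
    location /dashboard/ {
        proxy_pass http://192.168.0.123:2379/dashboard/;
    }
}
```

See [Use TiDB Dashboard behind a Reverse Proxy](/dashboard/dashboard-ops-reverse-proxy.md) for the recommended configuration.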
2 changes: 1 addition & 1 deletion dashboard/dashboard-profiling.md
@@ -37,7 +37,7 @@ You can access the instance profiling page using either of the following methods

![Access instance profiling page](/media/dashboard/dashboard-profiling-access.png)

- * Visit <http://127.0.0.1:2379/dashboard/#/instance_profiling> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/instance_profiling](http://127.0.0.1:2379/dashboard/#/instance_profiling) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

## Start Profiling

6 changes: 3 additions & 3 deletions dashboard/dashboard-resource-manager.md
@@ -13,7 +13,7 @@ You can use one of the following two methods to access the Resource Manager page

* After logging in to TiDB Dashboard, click **Resource Manager** in the left navigation menu.

- * Visit <http://127.0.0.1:2379/dashboard/#/resource_manager> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/resource_manager](http://127.0.0.1:2379/dashboard/#/resource_manager) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

## Resource Manager page

@@ -37,9 +37,9 @@ The Resource Manager page contains the following three sections:
Before resource planning, you need to know the overall capacity of the cluster. TiDB provides two methods to estimate the capacity of [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) in the current cluster:

- [Estimate capacity based on hardware deployment](/sql-statements/sql-statement-calibrate-resource.md#estimate-capacity-based-on-hardware-deployment)

TiDB accepts the following workload types:

- `tpcc`: applies to workloads with heavy data write. It is estimated based on a workload model similar to `TPC-C`.
- `oltp_write_only`: applies to workloads with heavy data write. It is estimated based on a workload model similar to `sysbench oltp_write_only`.
- `oltp_read_write`: applies to workloads with even data read and write. It is estimated based on a workload model similar to `sysbench oltp_read_write`.
4 changes: 2 additions & 2 deletions dashboard/dashboard-slow-query.md
@@ -19,7 +19,7 @@ You can use one of the following two methods to access the slow query page:

* After logging in to TiDB Dashboard, click **Slow Queries** in the left navigation menu.

- * Visit <http://127.0.0.1:2379/dashboard/#/slow_query> in your browser. Replace `127.0.0.1:2379` with the actual PD address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/slow_query](http://127.0.0.1:2379/dashboard/#/slow_query) in your browser. Replace `127.0.0.1:2379` with the actual PD address and port.

All data displayed on the slow query page comes from TiDB slow query system tables and slow query logs. See [slow query logs](/identify-slow-queries.md) for details.

@@ -74,7 +74,7 @@ The following figure shows a visual execution plan.
- The graph shows the execution from left to right, and from top to bottom.
- Upper nodes are parent operators and lower nodes are child operators.
- The color of the title bar indicates the component where the operator is executed: yellow stands for TiDB, blue stands for TiKV, and pink stands for TiFlash.
- - The title bar shows the operator name and the text shown below is the basic information of the operator.
+ - The title bar shows the operator name and the text shown below is the basic information of the operator.

Click the node area, and the detailed operator information is displayed on the right sidebar.

4 changes: 2 additions & 2 deletions dashboard/dashboard-statement-list.md
@@ -15,13 +15,13 @@ You can use one of the following two methods to access the SQL statement summary

* After logging in to TiDB Dashboard, click **SQL Statements** in the left navigation menu.

- * Visit <http://127.0.0.1:2379/dashboard/#/statement> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/statement](http://127.0.0.1:2379/dashboard/#/statement) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

All the data shown on the SQL statement summary page are from the TiDB statement summary tables. For more details about the tables, see [TiDB Statement Summary Tables](/statement-summary-tables.md).

> **Note:**
>
- > In the **Mean Latency** column of the SQL statement summary page, the blue bar indicates the average execution time. If there is a yellow line on the blue bar for an SQL statement, the left and right sides of the yellow line respectively represent the minimum and maximum execution time of the SQL statement during the recent data collection cycle.
+ > In the **Mean Latency** column of the SQL statement summary page, the blue bar indicates the average execution time. If there is a yellow line on the blue bar for an SQL statement, the left and right sides of the yellow line respectively represent the minimum and maximum execution time of the SQL statement during the recent data collection cycle.

### Change Filters

2 changes: 1 addition & 1 deletion dashboard/top-sql.md
@@ -39,7 +39,7 @@ You can access the Top SQL page using either of the following methods:

![Top SQL](/media/dashboard/top-sql-access.png)

- * Visit <http://127.0.0.1:2379/dashboard/#/topsql> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+ * Visit [http://127.0.0.1:2379/dashboard/#/topsql](http://127.0.0.1:2379/dashboard/#/topsql) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.

## Enable Top SQL

2 changes: 1 addition & 1 deletion develop/dev-guide-playground-gitpod.md
@@ -8,7 +8,7 @@ title: Gitpod

With [Gitpod](https://www.gitpod.io/), you can get a full development environment in your browser with the click of a button or link, and you can write code right away.

- Gitpod is an open-source Kubernetes application (GitHub repository address: <https://github.com/gitpod-io/gitpod>) for direct-to-code development environments, which spins up fresh, automated development environments for each task, in the cloud, in seconds. It enables you to describe your development environment as code and start instant, remote and cloud-based development environments directly from your browser or your Desktop IDE.
+ Gitpod is an open-source Kubernetes application (GitHub repository address: [https://github.com/gitpod-io/gitpod](https://github.com/gitpod-io/gitpod)) for direct-to-code development environments, which spins up fresh, automated development environments for each task, in the cloud, in seconds. It enables you to describe your development environment as code and start instant, remote and cloud-based development environments directly from your browser or your Desktop IDE.

## Quick start

2 changes: 1 addition & 1 deletion develop/dev-guide-sample-application-java-spring-boot.md
@@ -216,7 +216,7 @@ If you want to learn more about the code of this application, refer to [implemen

## Step 6: HTTP requests

- After the service is up and running, you can send HTTP requests to the backend application. <http://localhost:8080> is the base URL that provides services. This tutorial uses a series of HTTP requests to show how to use the service.
+ After the service is up and running, you can send HTTP requests to the backend application. [http://localhost:8080](http://localhost:8080) is the base URL that provides services. This tutorial uses a series of HTTP requests to show how to use the service.

### Step 6.1 Use Postman requests (recommended)

2 changes: 1 addition & 1 deletion dm/dm-daily-check.md
@@ -9,7 +9,7 @@ This document summarizes how to perform a daily check on TiDB Data Migration (DM

+ Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](/dm/dm-query-status.md).

- + Method 2: If Prometheus and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. For example, suppose that the Grafana address is `172.16.10.71`, go to <http://172.16.10.71:3000>, enter the Grafana dashboard, and select the DM Dashboard to check monitoring metrics of DM. For more information about these metrics, see [DM Monitoring Metrics](/dm/monitor-a-dm-cluster.md).
+ + Method 2: If Prometheus and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. For example, suppose that the Grafana address is `172.16.10.71`, go to [http://172.16.10.71:3000](http://172.16.10.71:3000), enter the Grafana dashboard, and select the DM Dashboard to check monitoring metrics of DM. For more information about these metrics, see [DM Monitoring Metrics](/dm/monitor-a-dm-cluster.md).

+ Method 3: Check the running status of DM and the error (if any) using the log file.

2 changes: 1 addition & 1 deletion dm/migrate-data-using-dm.md
@@ -182,7 +182,7 @@ tiup dmctl --master-addr 172.16.10.71:8261 stop-task test

## Step 8: Monitor the task and check logs

- Assume that Prometheus, Alertmanager, and Grafana are successfully deployed along with the DM cluster using TiUP, and that the Grafana address is `172.16.10.71`. To view the alert information related to DM, open <http://172.16.10.71:9093> in a browser to access Alertmanager; to check monitoring metrics, go to <http://172.16.10.71:3000> and choose the DM dashboard.
+ Assume that Prometheus, Alertmanager, and Grafana are successfully deployed along with the DM cluster using TiUP, and that the Grafana address is `172.16.10.71`. To view the alert information related to DM, open [http://172.16.10.71:9093](http://172.16.10.71:9093) in a browser to access Alertmanager; to check monitoring metrics, go to [http://172.16.10.71:3000](http://172.16.10.71:3000) and choose the DM dashboard.

While the DM cluster is running, DM-master, DM-worker, and dmctl output the monitoring metrics information through logs. The log directory of each component is as follows:

2 changes: 1 addition & 1 deletion exporting-grafana-snapshots.md
@@ -14,7 +14,7 @@ Metrics data is important in troubleshooting. When you request remote assistance

## Usage

- MetricsTool can be accessed from <https://metricstool.pingcap.net/>. It consists of three sets of tools:
+ MetricsTool can be accessed from [https://metricstool.pingcap.net/](https://metricstool.pingcap.net/). It consists of three sets of tools:

* **Export**: A user script running on the browser's Developer Tool, allowing you to download a snapshot of all visible panels in the current dashboard on any Grafana v6.x.x server.
