From 4f47d2a8d27330491a92f4587104a3fbdd1d8e60 Mon Sep 17 00:00:00 2001
From: Ran
Date: Mon, 4 Sep 2023 18:02:49 +0800
Subject: [PATCH] replace autolinks

Signed-off-by: Ran
---
 CONTRIBUTING.md | 4 ++--
 best-practices/tidb-best-practices.md | 2 +-
 dashboard/continuous-profiling.md | 2 +-
 dashboard/dashboard-access.md | 4 ++--
 dashboard/dashboard-cluster-info.md | 2 +-
 dashboard/dashboard-key-visualizer.md | 2 +-
 dashboard/dashboard-ops-security.md | 6 +++---
 dashboard/dashboard-profiling.md | 2 +-
 dashboard/dashboard-resource-manager.md | 6 +++---
 dashboard/dashboard-slow-query.md | 4 ++--
 dashboard/dashboard-statement-list.md | 4 ++--
 dashboard/top-sql.md | 2 +-
 develop/dev-guide-playground-gitpod.md | 2 +-
 ...-guide-sample-application-java-spring-boot.md | 2 +-
 dm/dm-daily-check.md | 2 +-
 dm/migrate-data-using-dm.md | 2 +-
 exporting-grafana-snapshots.md | 2 +-
 quick-start-with-tidb.md | 16 ++++++++--------
 scale-tidb-using-tiup.md | 12 ++++++------
 tidb-binlog/monitor-tidb-binlog-cluster.md | 2 +-
 tidb-cloud/config-s3-and-gcs-access.md | 6 +++---
 tidb-cloud/data-service-oas-with-nextjs.md | 2 +-
 tidb-cloud/integrate-tidbcloud-with-airbyte.md | 2 +-
 tidb-cloud/integrate-tidbcloud-with-dbt.md | 2 +-
 tidb-cloud/monitor-datadog-integration.md | 2 +-
 ...up-private-endpoint-connections-serverless.md | 2 +-
 .../set-up-private-endpoint-connections.md | 2 +-
 tidb-troubleshooting-map.md | 4 ++--
 28 files changed, 51 insertions(+), 51 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index c9ec6c7964d64..9a06684dea224 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -88,7 +88,7 @@ Your Pull Requests can only be merged after you sign the [Contributor License Ag
 
 ### Step 1: Fork the repository
 
-1. Visit the project: <https://github.com/pingcap/docs>
+1. Visit the project: [https://github.com/pingcap/docs](https://github.com/pingcap/docs)
 2. Click the **Fork** button on the top right and wait it to finish.
 
 ### Step 2: Clone the forked repository to local storage
@@ -149,7 +149,7 @@ git push -u origin new-branch-name # "-u" is used to track the remote branch fro
 
 ### Step 8: Create a pull request
 
-1. Visit your fork at <https://github.com/$user/docs> (replace `$user` with your GitHub ID)
+1. Visit your fork at [https://github.com/$user/docs](https://github.com/$user/docs) (replace `$user` with your GitHub ID)
 2. Click the `Compare & pull request` button next to your `new-branch-name` branch to create your PR. See [Pull Request Title Style](https://github.com/pingcap/community/blob/master/contributors/commit-message-pr-style.md#pull-request-title-style).
 
 Now, your PR is successfully submitted! After this PR is merged, you will automatically become a contributor to TiDB documentation.
diff --git a/best-practices/tidb-best-practices.md b/best-practices/tidb-best-practices.md
index acb2ce5fcca94..b5163ee270bf3 100644
--- a/best-practices/tidb-best-practices.md
+++ b/best-practices/tidb-best-practices.md
@@ -194,7 +194,7 @@ There are lots of items in the monitoring system, the majority of which are for
 
 In addition to monitoring, you can also view the system logs. The three components of TiDB, tidb-server, tikv-server, and pd-server, each has a `--log-file` parameter. If this parameter has been configured when the cluster is started, logs are stored in the file configured by the parameter and log files are automatically archived on a daily basis. If the `--log-file` parameter has not been configured, the log is output to `stderr`.
 
-Starting from TiDB 4.0, TiDB provides [TiDB Dashboard](/dashboard/dashboard-intro.md) UI to improve usability. You can access TiDB Dashboard by visiting <http://${PD_IP}:${PD_PORT}/dashboard> in your browser. TiDB Dashboard provides features such as viewing cluster status, performance analysis, traffic visualization, cluster diagnostics, and log searching.
+Starting from TiDB 4.0, TiDB provides [TiDB Dashboard](/dashboard/dashboard-intro.md) UI to improve usability. You can access TiDB Dashboard by visiting [http://${PD_IP}:${PD_PORT}/dashboard](http://${PD_IP}:${PD_PORT}/dashboard) in your browser. TiDB Dashboard provides features such as viewing cluster status, performance analysis, traffic visualization, cluster diagnostics, and log searching.
 
 ### Documentation
 
diff --git a/dashboard/continuous-profiling.md b/dashboard/continuous-profiling.md
index 4b25399342dca..608974910fdf5 100644
--- a/dashboard/continuous-profiling.md
+++ b/dashboard/continuous-profiling.md
@@ -42,7 +42,7 @@ You can access the Continuous Profiling page using either of the following metho
 
     ![Access page](/media/dashboard/dashboard-conprof-access.png)
 
-* Visit <http://127.0.0.1:2379/dashboard/#/continuous_profiling> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/continuous_profiling](http://127.0.0.1:2379/dashboard/#/continuous_profiling) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 ## Enable Continuous Profiling
 
diff --git a/dashboard/dashboard-access.md b/dashboard/dashboard-access.md
index 8bc6722706305..ac3e73904997b 100644
--- a/dashboard/dashboard-access.md
+++ b/dashboard/dashboard-access.md
@@ -5,7 +5,7 @@ summary: Learn how to access TiDB Dashboard.
 
 # Access TiDB Dashboard
 
-To access TiDB Dashboard, visit <http://127.0.0.1:2379/dashboard> via your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+To access TiDB Dashboard, visit [http://127.0.0.1:2379/dashboard](http://127.0.0.1:2379/dashboard) via your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 > **Note:**
 >
@@ -13,7 +13,7 @@ To access TiDB Dashboard, visit <http://127.0.0.1:2379/dashboard> via your brows
 
 ## Access TiDB Dashboard when multiple PD instances are deployed
 
-When multiple multiple PD instances are deployed in your cluster and you can directly access **every** PD instance and port, you can simply replace `127.0.0.1:2379` in the <http://127.0.0.1:2379/dashboard/> address with **any** PD instance address and port.
+When multiple PD instances are deployed in your cluster and you can directly access **every** PD instance and port, you can simply replace `127.0.0.1:2379` in the [http://127.0.0.1:2379/dashboard/](http://127.0.0.1:2379/dashboard/) address with **any** PD instance address and port.
 
 > **Note:**
 >
diff --git a/dashboard/dashboard-cluster-info.md b/dashboard/dashboard-cluster-info.md
index fe1eb57c61c4b..056873cad3f2c 100644
--- a/dashboard/dashboard-cluster-info.md
+++ b/dashboard/dashboard-cluster-info.md
@@ -13,7 +13,7 @@ You can use one of the following two methods to access the cluster information p
 
 * After logging in to TiDB Dashboard, click **Cluster Info** in the left navigation menu.
 
-* Visit <http://127.0.0.1:2379/dashboard/#/cluster_info/instance> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/cluster_info/instance](http://127.0.0.1:2379/dashboard/#/cluster_info/instance) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 ## Instance list
 
diff --git a/dashboard/dashboard-key-visualizer.md b/dashboard/dashboard-key-visualizer.md
index b13fd1ba6e5a3..33cf8ade056a4 100644
--- a/dashboard/dashboard-key-visualizer.md
+++ b/dashboard/dashboard-key-visualizer.md
@@ -15,7 +15,7 @@ You can use one of the following two methods to access the Key Visualizer page:
 
     ![Access Key Visualizer](/media/dashboard/dashboard-keyviz-access-v650.png)
 
-* Visit <http://127.0.0.1:2379/dashboard/#/keyviz> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/keyviz](http://127.0.0.1:2379/dashboard/#/keyviz) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 ## Interface demonstration
 
diff --git a/dashboard/dashboard-ops-security.md b/dashboard/dashboard-ops-security.md
index 63b9ac697fa32..08a2dd6ef0a9a 100644
--- a/dashboard/dashboard-ops-security.md
+++ b/dashboard/dashboard-ops-security.md
@@ -27,7 +27,7 @@ It is recommended that you create a least-privileged SQL user to access and sign
 >
 > TiDB v6.5.0 (and later) and TiDB Operator v1.4.0 (and later) support deploying TiDB Dashboard as an independent Pod on Kubernetes. Using TiDB Operator, you can access the IP address of this Pod to start TiDB Dashboard. This port does not communicate with other privileged interfaces of PD and no extra firewall is required if provided externally. For details, see [Deploy TiDB Dashboard independently in TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/dev/get-started#deploy-tidb-dashboard-independently).
 
-TiDB Dashboard provides services through the PD client port, which defaults to <http://IP:2379/dashboard/>. Although TiDB Dashboard requires identity authentication, other privileged interfaces (such as <http://IP:2379/pd/api/v1/members>) in PD carried on the PD client port do not require identity authentication and can perform privileged operations. Therefore, exposing the PD client port directly to the external network is extremely risky.
+TiDB Dashboard provides services through the PD client port, which defaults to [http://IP:2379/dashboard/](http://IP:2379/dashboard/). Although TiDB Dashboard requires identity authentication, other privileged interfaces (such as [http://IP:2379/pd/api/v1/members](http://IP:2379/pd/api/v1/members)) in PD carried on the PD client port do not require identity authentication and can perform privileged operations. Therefore, exposing the PD client port directly to the external network is extremely risky.
 
 It is recommended that you take the following measures:
 
@@ -79,11 +79,11 @@ The following is a sample output:
 http://192.168.0.123:2379/dashboard/
 ```
 
-In this example, the firewall needs to be configured with inbound access for the `2379` port of the `192.168.0.123` open IP, and the TiDB Dashboard is accessed via <http://192.168.0.123:2379/dashboard/>.
+In this example, the firewall needs to be configured with inbound access for the `2379` port of the `192.168.0.123` open IP, and the TiDB Dashboard is accessed via [http://192.168.0.123:2379/dashboard/](http://192.168.0.123:2379/dashboard/).
 
 ## Reverse proxy only for TiDB Dashboard
 
-As mentioned in [Use a firewall to block untrusted access](#use-a-firewall-to-block-untrusted-access), the services provided under the PD client port include not only TiDB Dashboard (located at <http://IP:2379/dashboard/>), but also other privileged interfaces in PD (such as <http://IP:2379/pd/api/v1/members>). Therefore, when using a reverse proxy to provide TiDB Dashboard to the external network, ensure that the services **ONLY** with the `/dashboard` prefix are provided (**NOT** all services under the port) to avoid that the external network can access the privileged interface in PD through the reverse proxy.
+As mentioned in [Use a firewall to block untrusted access](#use-a-firewall-to-block-untrusted-access), the services provided under the PD client port include not only TiDB Dashboard (located at [http://IP:2379/dashboard/](http://IP:2379/dashboard/)), but also other privileged interfaces in PD (such as [http://IP:2379/pd/api/v1/members](http://IP:2379/pd/api/v1/members)). Therefore, when using a reverse proxy to provide TiDB Dashboard to the external network, ensure that the services **ONLY** with the `/dashboard` prefix are provided (**NOT** all services under the port) to avoid that the external network can access the privileged interface in PD through the reverse proxy.
 
 It is recommended that you see [Use TiDB Dashboard behind a Reverse Proxy](/dashboard/dashboard-ops-reverse-proxy.md) to learn a safe and recommended reverse proxy configuration.
 
diff --git a/dashboard/dashboard-profiling.md b/dashboard/dashboard-profiling.md
index 867c8d3aab6f3..b0a3c4c9e230f 100644
--- a/dashboard/dashboard-profiling.md
+++ b/dashboard/dashboard-profiling.md
@@ -37,7 +37,7 @@ You can access the instance profiling page using either of the following methods
 
    ![Access instance profiling page](/media/dashboard/dashboard-profiling-access.png)
 
-* Visit <http://127.0.0.1:2379/dashboard/#/instance_profiling> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/instance_profiling](http://127.0.0.1:2379/dashboard/#/instance_profiling) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 ## Start Profiling
 
diff --git a/dashboard/dashboard-resource-manager.md b/dashboard/dashboard-resource-manager.md
index ab4e88e1ceb34..539da9758e4a1 100644
--- a/dashboard/dashboard-resource-manager.md
+++ b/dashboard/dashboard-resource-manager.md
@@ -13,7 +13,7 @@ You can use one of the following two methods to access the Resource Manager page
 
 * After logging in to TiDB Dashboard, click **Resource Manager** in the left navigation menu.
 
-* Visit <http://127.0.0.1:2379/dashboard/#/resource_manager> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/resource_manager](http://127.0.0.1:2379/dashboard/#/resource_manager) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 ## Resource Manager page
 
@@ -37,9 +37,9 @@ The Resource Manager page contains the following three sections:
 Before resource planning, you need to know the overall capacity of the cluster. TiDB provides two methods to estimate the capacity of [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) in the current cluster:
 
 - [Estimate capacity based on hardware deployment](/sql-statements/sql-statement-calibrate-resource.md#estimate-capacity-based-on-hardware-deployment)
-  
+
     TiDB accepts the following workload types:
-  
+
     - `tpcc`: applies to workloads with heavy data write. It is estimated based on a workload model similar to `TPC-C`.
     - `oltp_write_only`: applies to workloads with heavy data write. It is estimated based on a workload model similar to `sysbench oltp_write_only`.
     - `oltp_read_write`: applies to workloads with even data read and write. It is estimated based on a workload model similar to `sysbench oltp_read_write`.
diff --git a/dashboard/dashboard-slow-query.md b/dashboard/dashboard-slow-query.md
index 25ea96f2f39b6..f8f1db1363bfa 100644
--- a/dashboard/dashboard-slow-query.md
+++ b/dashboard/dashboard-slow-query.md
@@ -19,7 +19,7 @@ You can use one of the following two methods to access the slow query page:
 
 * After logging in to TiDB Dashboard, click **Slow Queries** in the left navigation menu.
 
-* Visit <http://127.0.0.1:2379/dashboard/#/slow_query> in your browser. Replace `127.0.0.1:2379` with the actual PD address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/slow_query](http://127.0.0.1:2379/dashboard/#/slow_query) in your browser. Replace `127.0.0.1:2379` with the actual PD address and port.
 
 All data displayed on the slow query page comes from TiDB slow query system tables and slow query logs. See [slow query logs](/identify-slow-queries.md) for details.
 
@@ -74,7 +74,7 @@ The following figure shows a visual execution plan.
 
 - The graph shows the execution from left to right, and from top to bottom.
 - Upper nodes are parent operators and lower nodes are child operators.
- The color of the title bar indicates the component where the operator is executed: yellow stands for TiDB, blue stands for TiKV, and pink stands for TiFlash.
-- The title bar shows the operator name and the text shown below is the basic information of the operator. 
+- The title bar shows the operator name and the text shown below is the basic information of the operator.
 
 Click the node area, and the detailed operator information is displayed on the right sidebar.
diff --git a/dashboard/dashboard-statement-list.md b/dashboard/dashboard-statement-list.md
index 971a86be78ad0..9c16ee4a18a50 100644
--- a/dashboard/dashboard-statement-list.md
+++ b/dashboard/dashboard-statement-list.md
@@ -15,13 +15,13 @@ You can use one of the following two methods to access the SQL statement summary
 
 * After logging in to TiDB Dashboard, click **SQL Statements** in the left navigation menu.
 
-* Visit <http://127.0.0.1:2379/dashboard/#/statement> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/statement](http://127.0.0.1:2379/dashboard/#/statement) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 All the data shown on the SQL statement summary page are from the TiDB statement summary tables. For more details about the tables, see [TiDB Statement Summary Tables](/statement-summary-tables.md).
 
 > **Note:**
 >
-> In the **Mean Latency** column of the SQL statement summary page, the blue bar indicates the average execution time. If there is a yellow line on the blue bar for an SQL statement, the left and right sides of the yellow line respectively represent the minimum and maximum execution time of the SQL statement during the recent data collection cycle. 
+> In the **Mean Latency** column of the SQL statement summary page, the blue bar indicates the average execution time. If there is a yellow line on the blue bar for an SQL statement, the left and right sides of the yellow line respectively represent the minimum and maximum execution time of the SQL statement during the recent data collection cycle.
 
 ### Change Filters
 
diff --git a/dashboard/top-sql.md b/dashboard/top-sql.md
index f6f0cfc6fd48b..a4e10b830b831 100644
--- a/dashboard/top-sql.md
+++ b/dashboard/top-sql.md
@@ -39,7 +39,7 @@ You can access the Top SQL page using either of the following methods:
 
    ![Top SQL](/media/dashboard/top-sql-access.png)
 
-* Visit <http://127.0.0.1:2379/dashboard/#/topsql> in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
+* Visit [http://127.0.0.1:2379/dashboard/#/topsql](http://127.0.0.1:2379/dashboard/#/topsql) in your browser. Replace `127.0.0.1:2379` with the actual PD instance address and port.
 
 ## Enable Top SQL
 
diff --git a/develop/dev-guide-playground-gitpod.md b/develop/dev-guide-playground-gitpod.md
index fae77fb1f7f05..ce9914ca69ae7 100644
--- a/develop/dev-guide-playground-gitpod.md
+++ b/develop/dev-guide-playground-gitpod.md
@@ -8,7 +8,7 @@ title: Gitpod
 
 With [Gitpod](https://www.gitpod.io/), you can get a full development environment in your browser with the click of a button or link, and you can write code right away.
 
-Gitpod is an open-source Kubernetes application (GitHub repository address: <https://github.com/gitpod-io/gitpod>) for direct-to-code development environments, which spins up fresh, automated development environments for each task, in the cloud, in seconds. It enables you to describe your development environment as code and start instant, remote and cloud-based development environments directly from your browser or your Desktop IDE.
+Gitpod is an open-source Kubernetes application (GitHub repository address: [https://github.com/gitpod-io/gitpod](https://github.com/gitpod-io/gitpod)) for direct-to-code development environments, which spins up fresh, automated development environments for each task, in the cloud, in seconds. It enables you to describe your development environment as code and start instant, remote and cloud-based development environments directly from your browser or your Desktop IDE.
 
 ## Quick start
 
diff --git a/develop/dev-guide-sample-application-java-spring-boot.md b/develop/dev-guide-sample-application-java-spring-boot.md
index 1d1a5b139f168..a5537b4e5c2f4 100644
--- a/develop/dev-guide-sample-application-java-spring-boot.md
+++ b/develop/dev-guide-sample-application-java-spring-boot.md
@@ -216,7 +216,7 @@ If you want to learn more about the code of this application, refer to [implemen
 
 ## Step 6: HTTP requests
 
-After the service is up and running, you can send the HTTP requests to the backend application. <http://localhost:8080> is the base URL that provides services. This tutorial uses a series of HTTP requests to show how to use the service.
+After the service is up and running, you can send the HTTP requests to the backend application. [http://localhost:8080](http://localhost:8080) is the base URL that provides services. This tutorial uses a series of HTTP requests to show how to use the service.
 
 ### Step 6.1 Use Postman requests (recommended)
 
diff --git a/dm/dm-daily-check.md b/dm/dm-daily-check.md
index 0376d91544d62..c0e50c64a0bcd 100644
--- a/dm/dm-daily-check.md
+++ b/dm/dm-daily-check.md
@@ -9,7 +9,7 @@ This document summarizes how to perform a daily check on TiDB Data Migration (DM
 
 + Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](/dm/dm-query-status.md).
 
-+ Method 2: If Prometheus and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. For example, suppose that the Grafana's address is `172.16.10.71`, go to <http://172.16.10.71:3000>, enter the Grafana dashboard, and select the DM Dashboard to check monitoring metrics of DM. For more information of these metrics, see [DM Monitoring Metrics](/dm/monitor-a-dm-cluster.md).
++ Method 2: If Prometheus and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. For example, suppose that the Grafana address is `172.16.10.71`, go to [http://172.16.10.71:3000](http://172.16.10.71:3000), enter the Grafana dashboard, and select the DM Dashboard to check monitoring metrics of DM. For more information of these metrics, see [DM Monitoring Metrics](/dm/monitor-a-dm-cluster.md).
 
 + Method 3: Check the running status of DM and the error (if any) using the log file.
 
diff --git a/dm/migrate-data-using-dm.md b/dm/migrate-data-using-dm.md
index 5cb316c28ee1a..4b0ac58cda56e 100644
--- a/dm/migrate-data-using-dm.md
+++ b/dm/migrate-data-using-dm.md
@@ -182,7 +182,7 @@ tiup dmctl --master-addr 172.16.10.71:8261 stop-task test
 
 ## Step 8: Monitor the task and check logs
 
-Assuming that Prometheus, Alertmanager, and Grafana are successfully deployed along with the DM cluster deployment using TiUP, and the Grafana address is `172.16.10.71`. To view the alert information related to DM, you can open <http://172.16.10.71:9093> in a browser and enter into Alertmanager; to check monitoring metrics, go to <http://172.16.10.71:3000>, and choose the DM dashboard.
+Assuming that Prometheus, Alertmanager, and Grafana are successfully deployed along with the DM cluster deployment using TiUP, and the Grafana address is `172.16.10.71`. To view the alert information related to DM, you can open [http://172.16.10.71:9093](http://172.16.10.71:9093) in a browser and enter into Alertmanager; to check monitoring metrics, go to [http://172.16.10.71:3000](http://172.16.10.71:3000), and choose the DM dashboard.
 
 While the DM cluster is running, DM-master, DM-worker, and dmctl output the monitoring metrics information through logs. The log directory of each component is as follows:
 
diff --git a/exporting-grafana-snapshots.md b/exporting-grafana-snapshots.md
index 48a0dc68e435d..2fae40be32702 100644
--- a/exporting-grafana-snapshots.md
+++ b/exporting-grafana-snapshots.md
@@ -14,7 +14,7 @@ Metrics data is important in troubleshooting. When you request remote assistance
 
 ## Usage
 
-MetricsTool can be accessed from <https://metricstool.pingcap.net/>. It consists of three sets of tools:
+MetricsTool can be accessed from [https://metricstool.pingcap.net/](https://metricstool.pingcap.net/). It consists of three sets of tools:
 
 * **Export**: A user script running on the browser's Developer Tool, allowing you to download a snapshot of all visible panels in the current dashboard on any Grafana v6.x.x server.
 
diff --git a/quick-start-with-tidb.md b/quick-start-with-tidb.md
index 28dec29f9533b..189042f3f61a8 100644
--- a/quick-start-with-tidb.md
+++ b/quick-start-with-tidb.md
@@ -121,11 +121,11 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in
     mysql --host 127.0.0.1 --port 4000 -u root
     ```
 
-5. Access the Prometheus dashboard of TiDB at <http://127.0.0.1:9090>.
+5. Access the Prometheus dashboard of TiDB at [http://127.0.0.1:9090](http://127.0.0.1:9090).
 
-6. Access the [TiDB Dashboard](/dashboard/dashboard-intro.md) at <http://127.0.0.1:2379/dashboard>. The default username is `root`, and the password is empty.
+6. Access the [TiDB Dashboard](/dashboard/dashboard-intro.md) at [http://127.0.0.1:2379/dashboard](http://127.0.0.1:2379/dashboard). The default username is `root`, and the password is empty.
 
-7. Access the Grafana dashboard of TiDB through <http://127.0.0.1:3000>. Both the default username and password are `admin`.
+7. Access the Grafana dashboard of TiDB through [http://127.0.0.1:3000](http://127.0.0.1:3000). Both the default username and password are `admin`.
 
 8. (Optional) [Load data to TiFlash](/tiflash/tiflash-overview.md#use-tiflash) for analysis.
 
@@ -240,11 +240,11 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in
     mysql --host 127.0.0.1 --port 4000 -u root
     ```
 
-5. Access the Prometheus dashboard of TiDB at <http://127.0.0.1:9090>.
+5. Access the Prometheus dashboard of TiDB at [http://127.0.0.1:9090](http://127.0.0.1:9090).
 
-6. Access the [TiDB Dashboard](/dashboard/dashboard-intro.md) at <http://127.0.0.1:2379/dashboard>. The default username is `root`, and the password is empty.
+6. Access the [TiDB Dashboard](/dashboard/dashboard-intro.md) at [http://127.0.0.1:2379/dashboard](http://127.0.0.1:2379/dashboard). The default username is `root`, and the password is empty.
 
-7. Access the Grafana dashboard of TiDB through <http://127.0.0.1:3000>. Both the default username and password are `admin`.
+7. Access the Grafana dashboard of TiDB through [http://127.0.0.1:3000](http://127.0.0.1:3000). Both the default username and password are `admin`.
 
 8. (Optional) [Load data to TiFlash](/tiflash/tiflash-overview.md#use-tiflash) for analysis.
 
@@ -476,9 +476,9 @@ Other requirements for the target machine include:
     mysql -h 10.0.1.1 -P 4000 -u root
     ```
 
-    - Access the Grafana monitoring dashboard at <http://{grafana-ip}:3000>. The default username and password are both `admin`.
+    - Access the Grafana monitoring dashboard at [http://{grafana-ip}:3000](http://{grafana-ip}:3000). The default username and password are both `admin`.
 
-    - Access the [TiDB Dashboard](/dashboard/dashboard-intro.md) at <http://{pd-ip}:2379/dashboard>. The default username is `root`, and the password is empty.
+    - Access the [TiDB Dashboard](/dashboard/dashboard-intro.md) at [http://{pd-ip}:2379/dashboard](http://{pd-ip}:2379/dashboard). The default username is `root`, and the password is empty.
 
     - To view the currently deployed cluster list:
 
diff --git a/scale-tidb-using-tiup.md b/scale-tidb-using-tiup.md
index bf5c99489f8cc..f9ee15182fbf1 100644
--- a/scale-tidb-using-tiup.md
+++ b/scale-tidb-using-tiup.md
@@ -134,7 +134,7 @@ This section exemplifies how to add a TiDB node to the `10.0.1.5` host.
     tiup cluster display <cluster-name>
     ```
 
-    Access the monitoring platform at <http://10.0.1.5:3000> using your browser to monitor the status of the cluster and the new node.
+    Access the monitoring platform at [http://10.0.1.5:3000](http://10.0.1.5:3000) using your browser to monitor the status of the cluster and the new node.
 
 After the scale-out, the cluster topology is as follows:
 
@@ -190,7 +190,7 @@ This section exemplifies how to add a TiFlash node to the `10.0.1.4` host.
    tiup cluster display <cluster-name>
    ```
 
-   Access the monitoring platform at <http://10.0.1.5:3000> using your browser, and view the status of the cluster and the new node.
+   Access the monitoring platform at [http://10.0.1.5:3000](http://10.0.1.5:3000) using your browser, and view the status of the cluster and the new node.
 
 After the scale-out, the cluster topology is as follows:
 
@@ -242,7 +242,7 @@ This section exemplifies how to add two TiCDC nodes to the `10.0.1.3` and `10.0.
    tiup cluster display <cluster-name>
    ```
 
-   Access the monitoring platform at <http://10.0.1.5:3000> using your browser, and view the status of the cluster and the new nodes.
+   Access the monitoring platform at [http://10.0.1.5:3000](http://10.0.1.5:3000) using your browser, and view the status of the cluster and the new nodes.
 
 After the scale-out, the cluster topology is as follows:
 
@@ -318,7 +318,7 @@ This section exemplifies how to remove a TiKV node from the `10.0.1.5` host.
 
    If the node to be scaled in becomes `Tombstone`, the scale-in operation succeeds.
 
-   Access the monitoring platform at <http://10.0.1.5:3000> using your browser, and view the status of the cluster.
+   Access the monitoring platform at [http://10.0.1.5:3000](http://10.0.1.5:3000) using your browser, and view the status of the cluster.
 
 The current topology is as follows:
 
@@ -473,7 +473,7 @@ The steps to manually clean up the replication rules in PD are below:
    tiup cluster display <cluster-name>
   ```
 
-   Access the monitoring platform at <http://10.0.1.5:3000> using your browser, and view the status of the cluster and the new nodes.
+   Access the monitoring platform at [http://10.0.1.5:3000](http://10.0.1.5:3000) using your browser, and view the status of the cluster and the new nodes.
 
 After the scale-out, the cluster topology is as follows:
 
@@ -505,7 +505,7 @@ After the scale-out, the cluster topology is as follows:
    tiup cluster display <cluster-name>
   ```
 
-   Access the monitoring platform at <http://10.0.1.5:3000> using your browser, and view the status of the cluster.
+   Access the monitoring platform at [http://10.0.1.5:3000](http://10.0.1.5:3000) using your browser, and view the status of the cluster.
 
 The current topology is as follows:
 
diff --git a/tidb-binlog/monitor-tidb-binlog-cluster.md b/tidb-binlog/monitor-tidb-binlog-cluster.md
index 19831474c3b89..7f77c0a531e23 100644
--- a/tidb-binlog/monitor-tidb-binlog-cluster.md
+++ b/tidb-binlog/monitor-tidb-binlog-cluster.md
@@ -5,7 +5,7 @@ summary: Learn how to monitor the cluster version of TiDB Binlog.
 
 # TiDB Binlog Monitoring
 
-After you have deployed TiDB Binlog successfully, you can go to the Grafana Web (default address: <http://grafana_ip:3000>, default account: admin, password: admin) to check the state of Pump and Drainer.
+After you have deployed TiDB Binlog successfully, you can go to the Grafana Web (default address: [http://grafana_ip:3000](http://grafana_ip:3000), default account: admin, password: admin) to check the state of Pump and Drainer.
 
 ## Monitoring metrics
 
diff --git a/tidb-cloud/config-s3-and-gcs-access.md b/tidb-cloud/config-s3-and-gcs-access.md
index 641366334d176..0511669d4a62a 100644
--- a/tidb-cloud/config-s3-and-gcs-access.md
+++ b/tidb-cloud/config-s3-and-gcs-access.md
@@ -35,12 +35,12 @@ Configure the bucket access for TiDB Cloud and get the Role ARN as follows:
 
 2. In the AWS Management Console, create a managed policy for your Amazon S3 bucket.
 
-    1. Sign in to the AWS Management Console and open the Amazon S3 console at <https://console.aws.amazon.com/s3/>.
+    1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).
     2. In the **Buckets** list, choose the name of your bucket with the source data, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
 
        ![Copy bucket ARN](/media/tidb-cloud/copy-bucket-arn.png)
 
-    3. Open the IAM console at <https://console.aws.amazon.com/iam/>, click **Policies** in the navigation pane on the left, and then click **Create Policy**.
+    3. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/), click **Policies** in the navigation pane on the left, and then click **Create Policy**.
 
        ![Create a policy](/media/tidb-cloud/aws-create-policy.png)
 
@@ -89,7 +89,7 @@ Configure the bucket access for TiDB Cloud and get the Role ARN as follows:
 
 3. In the AWS Management Console, create an access role for TiDB Cloud and get the role ARN.
 
-    1. In the IAM console at <https://console.aws.amazon.com/iam/>, click **Roles** in the navigation pane on the left, and then click **Create role**.
+    1. In the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/), click **Roles** in the navigation pane on the left, and then click **Create role**.
 
       ![Create a role](/media/tidb-cloud/aws-create-role.png)
 
diff --git a/tidb-cloud/data-service-oas-with-nextjs.md b/tidb-cloud/data-service-oas-with-nextjs.md
index 29d3dd2901786..d1d3b4b3e20b3 100644
--- a/tidb-cloud/data-service-oas-with-nextjs.md
+++ b/tidb-cloud/data-service-oas-with-nextjs.md
@@ -206,4 +206,4 @@ To preview your application in a local development server, run the following com
 yarn dev
 ```
 
-You can then open <http://localhost:3000> in your browser and see the data from the `test.repository` database displayed on the page.
+You can then open [http://localhost:3000](http://localhost:3000) in your browser and see the data from the `test.repository` database displayed on the page.
diff --git a/tidb-cloud/integrate-tidbcloud-with-airbyte.md b/tidb-cloud/integrate-tidbcloud-with-airbyte.md
index c6ace16dfe33b..41064f01eeefa 100644
--- a/tidb-cloud/integrate-tidbcloud-with-airbyte.md
+++ b/tidb-cloud/integrate-tidbcloud-with-airbyte.md
@@ -26,7 +26,7 @@ You can deploy Airbyte locally with only a few steps.
    docker-compose up
   ```
 
-Once you see an Airbyte banner, you can go to <http://localhost:8000> with the username (`airbyte`) and password (`password`) to visit the UI.
+Once you see an Airbyte banner, you can go to [http://localhost:8000](http://localhost:8000) with the username (`airbyte`) and password (`password`) to visit the UI.
 
 ```
 airbyte-server | ___ _ __ __
diff --git a/tidb-cloud/integrate-tidbcloud-with-dbt.md b/tidb-cloud/integrate-tidbcloud-with-dbt.md
index ab90fa8111b71..5debd3c0a8c30 100644
--- a/tidb-cloud/integrate-tidbcloud-with-dbt.md
+++ b/tidb-cloud/integrate-tidbcloud-with-dbt.md
@@ -315,7 +315,7 @@ To generate visual documents, take the following steps:
    dbt docs serve
   ```
 
-3. To access the document from your browser, go to <http://localhost:8080>.
+3. To access the document from your browser, go to [http://localhost:8080](http://localhost:8080).
 
 ## Description of profile fields
 
diff --git a/tidb-cloud/monitor-datadog-integration.md b/tidb-cloud/monitor-datadog-integration.md
index 5fb45027474b2..c07513347c50e 100644
--- a/tidb-cloud/monitor-datadog-integration.md
+++ b/tidb-cloud/monitor-datadog-integration.md
@@ -39,7 +39,7 @@ TiDB Cloud supports Datadog integration (beta). You can configure TiDB Cloud to
 ### Step 2. Install TiDB Cloud Integration in Datadog
 
 1. Log in to [Datadog](https://app.datadoghq.com).
-2. Go to the **TiDB Cloud Integration** page (<https://app.datadoghq.com/account/settings#integrations/tidb-cloud>) in Datadog.
+2. Go to the **TiDB Cloud Integration** page ([https://app.datadoghq.com/account/settings#integrations/tidb-cloud](https://app.datadoghq.com/account/settings#integrations/tidb-cloud)) in Datadog.
 3. In the **Configuration** tab, click **Install Integration**. The [**TiDBCloud Cluster Overview**](https://app.datadoghq.com/dash/integration/30586/tidbcloud-cluster-overview) dashboard is displayed in your [**Dashboard List**](https://app.datadoghq.com/dashboard/lists).
 
 ## Pre-built dashboard
diff --git a/tidb-cloud/set-up-private-endpoint-connections-serverless.md b/tidb-cloud/set-up-private-endpoint-connections-serverless.md
index fc067b66d3baf..0195f5ae81b2a 100644
--- a/tidb-cloud/set-up-private-endpoint-connections-serverless.md
+++ b/tidb-cloud/set-up-private-endpoint-connections-serverless.md
@@ -56,7 +56,7 @@ To connect to your TiDB Serverless cluster via a private endpoint, follow these
 
 To use the AWS Management Console to create a VPC interface endpoint, perform the following steps:
 
-1. Sign in to the [AWS Management Console](https://aws.amazon.com/console/) and open the Amazon VPC console at <https://console.aws.amazon.com/vpc/>.
+1. Sign in to the [AWS Management Console](https://aws.amazon.com/console/) and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).
 2. Click **Endpoints** in the navigation pane, and then click **Create Endpoint** in the upper-right corner.
 
    The **Create endpoint** page is displayed.
diff --git a/tidb-cloud/set-up-private-endpoint-connections.md b/tidb-cloud/set-up-private-endpoint-connections.md
index e7ae14973bdf9..ff75084628416 100644
--- a/tidb-cloud/set-up-private-endpoint-connections.md
+++ b/tidb-cloud/set-up-private-endpoint-connections.md
@@ -88,7 +88,7 @@ Then create an AWS interface endpoint either using the AWS Management Console or
 
 To use the AWS Management Console to create a VPC interface endpoint, perform the following steps:
 
-1. Sign in to the [AWS Management Console](https://aws.amazon.com/console/) and open the Amazon VPC console at <https://console.aws.amazon.com/vpc/>.
+1. Sign in to the [AWS Management Console](https://aws.amazon.com/console/) and open the Amazon VPC console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).
 2. Click **Endpoints** in the navigation pane, and then click **Create Endpoint** in the upper-right corner.
 
    The **Create endpoint** page is displayed.
diff --git a/tidb-troubleshooting-map.md b/tidb-troubleshooting-map.md
index 32e1c09ca6241..dfec89c5449ac 100644
--- a/tidb-troubleshooting-map.md
+++ b/tidb-troubleshooting-map.md
@@ -423,9 +423,9 @@ Check the specific cause for busy by viewing the monitor **Grafana** -> **TiKV**
 
 - 6.1.5 Inconsistent data in upstream and downstream
 
-    - Some TiDB nodes do not enable binlog. For v3.0.6 or later versions, you can check the binlog status of all the nodes by accessing the <http://127.0.0.1:10080/info/all> interface. For versions earlier than v3.0.6, you can check the binlog status by viewing the configuration file.
+    - Some TiDB nodes do not enable binlog. For v3.0.6 or later versions, you can check the binlog status of all the nodes by accessing the [http://127.0.0.1:10080/info/all](http://127.0.0.1:10080/info/all) interface. For versions earlier than v3.0.6, you can check the binlog status by viewing the configuration file.
 
-    - Some TiDB nodes go into the `ignore binlog` status. For v3.0.6 or later versions, you can check the binlog status of all the nodes by accessing the <http://127.0.0.1:10080/info/all> interface. For versions earlier than v3.0.6, check the TiDB log to see whether it contains the `ignore binlog` keyword.
+    - Some TiDB nodes go into the `ignore binlog` status. For v3.0.6 or later versions, you can check the binlog status of all the nodes by accessing the [http://127.0.0.1:10080/info/all](http://127.0.0.1:10080/info/all) interface. For versions earlier than v3.0.6, check the TiDB log to see whether it contains the `ignore binlog` keyword.
 
     - The value of the timestamp column is inconsistent in upstream and downstream.