diff --git a/best-practices/high-concurrency-best-practices.md b/best-practices/high-concurrency-best-practices.md
index fc81441dc52e8..79deb6cae3afd 100644
--- a/best-practices/high-concurrency-best-practices.md
+++ b/best-practices/high-concurrency-best-practices.md
@@ -9,11 +9,11 @@ This document describes best practices for handling highly-concurrent write-heav

 ## Target audience

-This document assumes that you have a basic understanding of TiDB. It is recommended that you first read the following three blog articles that explain TiDB fundamentals, and [TiDB Best Practices](https://en.pingcap.com/blog/tidb-best-practice/):
+This document assumes that you have a basic understanding of TiDB. It is recommended that you first read the following three blog articles that explain TiDB fundamentals, and [TiDB Best Practices](https://www.pingcap.com/blog/tidb-best-practice/):

-+ [Data Storage](https://en.pingcap.com/blog/tidb-internal-data-storage/)
-+ [Computing](https://en.pingcap.com/blog/tidb-internal-computing/)
-+ [Scheduling](https://en.pingcap.com/blog/tidb-internal-scheduling/)
++ [Data Storage](https://www.pingcap.com/blog/tidb-internal-data-storage/)
++ [Computing](https://www.pingcap.com/blog/tidb-internal-computing/)
++ [Scheduling](https://www.pingcap.com/blog/tidb-internal-scheduling/)

 ## Highly-concurrent write-intensive scenario

@@ -32,7 +32,7 @@ For a distributed database, it is important to make full use of the capacity of

 ## Data distribution principles in TiDB

-To address the above challenges, it is necessary to start with the data segmentation and scheduling principle of TiDB. Refer to [Scheduling](https://en.pingcap.com/blog/tidb-internal-scheduling/) for more details.
+To address the above challenges, it is necessary to start with the data segmentation and scheduling principle of TiDB. Refer to [Scheduling](https://www.pingcap.com/blog/tidb-internal-scheduling/) for more details.

 TiDB splits data into Regions, each representing a range of data with a size limit of 96M by default. Each Region has multiple replicas, and each group of replicas is called a Raft Group. In a Raft Group, the Region Leader executes the read and write tasks (TiDB supports [Follower-Read](/follower-read.md)) within the data range. The Region Leader is automatically scheduled by the Placement Driver (PD) component to different physical nodes evenly to distribute the read and write pressure.
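To ground the Region and Leader concepts that the hunk above links out to, here is a minimal SQL sketch. The table `t`, its schema, and the split range are illustrative assumptions, not content from the patched documentation:

```sql
-- Assumed example table; any table with an integer primary key works.
CREATE TABLE t (id BIGINT PRIMARY KEY, v VARCHAR(64));

-- Pre-split the key range 0..1000000 into 16 Regions so that concurrent
-- writes land on multiple Region Leaders instead of one initial Region.
SPLIT TABLE t BETWEEN (0) AND (1000000) REGIONS 16;

-- Inspect the Regions and the store holding each Leader; PD schedules
-- these Leaders evenly across TiKV nodes over time.
SHOW TABLE t REGIONS;
```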
diff --git a/best-practices/tidb-best-practices.md b/best-practices/tidb-best-practices.md
index acb2ce5fcca94..fd07d3084553e 100644
--- a/best-practices/tidb-best-practices.md
+++ b/best-practices/tidb-best-practices.md
@@ -9,9 +9,9 @@ This document summarizes the best practices of using TiDB, including the use of

 Before you read this document, it is recommended that you read three blog posts that introduce the technical principles of TiDB:

-* [TiDB Internal (I) - Data Storage](https://en.pingcap.com/blog/tidb-internal-data-storage/)
-* [TiDB Internal (II) - Computing](https://en.pingcap.com/blog/tidb-internal-computing/)
-* [TiDB Internal (III) - Scheduling](https://en.pingcap.com/blog/tidb-internal-scheduling/)
+* [TiDB Internal (I) - Data Storage](https://www.pingcap.com/blog/tidb-internal-data-storage/)
+* [TiDB Internal (II) - Computing](https://www.pingcap.com/blog/tidb-internal-computing/)
+* [TiDB Internal (III) - Scheduling](https://www.pingcap.com/blog/tidb-internal-scheduling/)

 ## Preface

@@ -67,7 +67,7 @@ Placement Driver (PD) balances the load of the cluster according to the status o

 ### SQL on KV

-TiDB automatically maps the SQL structure into Key-Value structure. For details, see [TiDB Internal (II) - Computing](https://en.pingcap.com/blog/tidb-internal-computing/).
+TiDB automatically maps the SQL structure into Key-Value structure. For details, see [TiDB Internal (II) - Computing](https://www.pingcap.com/blog/tidb-internal-computing/).

 Simply put, TiDB performs the following operations:
diff --git a/dashboard/dashboard-key-visualizer.md b/dashboard/dashboard-key-visualizer.md
index 0ff0922d772d9..810939755a4ec 100644
--- a/dashboard/dashboard-key-visualizer.md
+++ b/dashboard/dashboard-key-visualizer.md
@@ -37,7 +37,7 @@ This section introduces the basic concepts that relate to Key Visualizer.

 In a TiDB cluster, the stored data is distributed among TiKV instances. Logically, TiKV is a huge and orderly key-value map. The whole key-value space is divided into many segments and each segment consists of a series of adjacent keys. Such segment is called a `Region`.

-For detailed introduction of Region, refer to [TiDB Internal (I) - Data Storage](https://en.pingcap.com/blog/tidb-internal-data-storage/).
+For detailed introduction of Region, refer to [TiDB Internal (I) - Data Storage](https://www.pingcap.com/blog/tidb-internal-data-storage/).

 ### Hotspot
diff --git a/explore-htap.md b/explore-htap.md
index 18f172fa93e90..a5ffeb79725c7 100644
--- a/explore-htap.md
+++ b/explore-htap.md
@@ -29,7 +29,7 @@ The following are the typical use cases of HTAP:

 When using TiDB as a data hub, TiDB can meet specific business needs by seamlessly connecting the data for the application and the data warehouse.

-For more information about use cases of TiDB HTAP, see [blogs about HTAP on the PingCAP website](https://en.pingcap.com/blog/?tag=htap).
+For more information about use cases of TiDB HTAP, see [blogs about HTAP on the PingCAP website](https://www.pingcap.com/blog/?tag=htap).

 ## Architecture
diff --git a/faq/migration-tidb-faq.md b/faq/migration-tidb-faq.md
index c6853103a0a5e..6b37ac85f832a 100644
--- a/faq/migration-tidb-faq.md
+++ b/faq/migration-tidb-faq.md
@@ -170,13 +170,13 @@ Yes. But the `load data` does not support the `replace into` syntax.

 ### Why does the query speed getting slow after deleting data?

-Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the Region Merge feature is in development, which is expected to solve this problem. For details, see the [deleting data section in TiDB Best Practices](https://en.pingcap.com/blog/tidb-best-practice/#write).
+Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the Region Merge feature is in development, which is expected to solve this problem. For details, see the [deleting data section in TiDB Best Practices](https://www.pingcap.com/blog/tidb-best-practice/#write).

 ### What is the most efficient way of deleting data?

 When deleting a large amount of data, it is recommended to use `Delete from t where xx limit 5000;`. It deletes through the loop and uses `Affected Rows == 0` as a condition to end the loop, so as not to exceed the limit of transaction size. With the prerequisite of meeting business filtering logic, it is recommended to add a strong filter index column or directly use the primary key to select the range, such as `id >= 5000*n+m and id < 5000*(n+1)+m`.

-If the amount of data that needs to be deleted at a time is very large, this loop method will get slower and slower because each deletion traverses backward. After deleting the previous data, lots of deleted flags remain for a short period (then all will be processed by Garbage Collection) and influence the following Delete statement. If possible, it is recommended to refine the Where condition. See [details in TiDB Best Practices](https://en.pingcap.com/blog/tidb-best-practice/#write).
+If the amount of data that needs to be deleted at a time is very large, this loop method will get slower and slower because each deletion traverses backward. After deleting the previous data, lots of deleted flags remain for a short period (then all will be processed by Garbage Collection) and influence the following Delete statement. If possible, it is recommended to refine the Where condition. See [details in TiDB Best Practices](https://www.pingcap.com/blog/tidb-best-practice/#write).

 ### How to improve the data loading speed in TiDB?
diff --git a/faq/sql-faq.md b/faq/sql-faq.md
index af9d9baa7107b..473d92c591bcd 100644
--- a/faq/sql-faq.md
+++ b/faq/sql-faq.md
@@ -130,7 +130,7 @@ Yes. The exception being that `LOAD DATA` does not currently support the `REPLAC

 ## Why does the query speed get slow after data is deleted?

-Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the [Region Merge](/best-practices/massive-regions-best-practices.md) feature is in development, which is expected to solve this problem. For details, see the [deleting data section in TiDB Best Practices](https://en.pingcap.com/blog/tidb-best-practice/#write).
+Deleting a large amount of data leaves a lot of useless keys, affecting the query efficiency. Currently the [Region Merge](/best-practices/massive-regions-best-practices.md) feature is in development, which is expected to solve this problem. For details, see the [deleting data section in TiDB Best Practices](https://www.pingcap.com/blog/tidb-best-practice/#write).

 ## What should I do if it is slow to reclaim storage space after deleting data?
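The batched delete that both FAQ hunks above describe can be sketched directly in SQL. This is a minimal illustration: the table `t` and its integer primary key `id` are assumptions (with `m = 0` in the FAQ's range formula), not part of the patched docs:

```sql
-- Delete in small primary-key windows so each transaction stays small.
-- With a unique `id`, each window holds at most 5000 rows, so one
-- statement per window suffices; advance n until 5000*n exceeds MAX(id).
DELETE FROM t WHERE id >= 0    AND id < 5000  LIMIT 5000;  -- n = 0
DELETE FROM t WHERE id >= 5000 AND id < 10000 LIMIT 5000;  -- n = 1
-- ...continue with id >= 5000*n AND id < 5000*(n+1) for increasing n.
```

Bounding each batch by the primary-key range, rather than relying on `LIMIT` alone, keeps later batches from re-scanning keys that are deleted but not yet garbage-collected, which is what makes a plain `LIMIT` loop slow down over time.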
diff --git a/releases/versioning.md b/releases/versioning.md
index a87346f18dc49..b25e950157c97 100644
--- a/releases/versioning.md
+++ b/releases/versioning.md
@@ -14,7 +14,7 @@ TiDB offers two release series:

 * Long-Term Support Releases
 * Development Milestone Releases (introduced in TiDB v6.0.0)

-To learn about the support policy for major releases of TiDB, see [TiDB Release Support Policy](https://en.pingcap.com/tidb-release-support-policy/).
+To learn about the support policy for major releases of TiDB, see [TiDB Release Support Policy](https://www.pingcap.com/tidb-release-support-policy/).

 ## Release versioning
diff --git a/telemetry.md b/telemetry.md
index 011be2f8dc86e..a49eea9251d9b 100644
--- a/telemetry.md
+++ b/telemetry.md
@@ -261,4 +261,4 @@ To meet compliance requirements in different countries or regions, the usage inf
 - For IP addresses from the Chinese mainland, usage information is sent to and stored on cloud servers in the Chinese mainland.
 - For IP addresses from outside of the Chinese mainland, usage information is sent to and stored on cloud servers in the US.

-See [PingCAP Privacy Policy](https://en.pingcap.com/privacy-policy/) for details.
+See [PingCAP Privacy Policy](https://www.pingcap.com/privacy-policy/) for details.
diff --git a/tidb-cloud/release-notes-2021.md b/tidb-cloud/release-notes-2021.md
index cfc92acea4ef7..9198d8b0322de 100644
--- a/tidb-cloud/release-notes-2021.md
+++ b/tidb-cloud/release-notes-2021.md
@@ -91,7 +91,7 @@ Bug fixes:

 ## June 25, 2021

-* Fix the **Select Region** not working issue on the [TiDB Cloud Pricing](https://en.pingcap.com/products/tidbcloud/pricing/) page
+* Fix the **Select Region** not working issue on the [TiDB Cloud Pricing](https://www.pingcap.com/pricing/) page

 ## June 24, 2021
diff --git a/tidb-cloud/release-notes-2022.md b/tidb-cloud/release-notes-2022.md
index 816f4625a4daf..ea9031da137a9 100644
--- a/tidb-cloud/release-notes-2022.md
+++ b/tidb-cloud/release-notes-2022.md
@@ -32,7 +32,7 @@ This page lists the release notes of [TiDB Cloud](https://en.pingcap.com/tidb-cl
 * Provide a new option for TiKV node size: `8 vCPU, 32 GiB`. You can choose either `8 vCPU, 32 GiB` or `8 vCPU, 64 GiB` for an 8 vCPU TiKV node.
 * Support syntax highlighting in sample code provided in the **Connect to TiDB** dialog to improve code readability. You can easily identify the parameters that you need to replace in the sample code.
 * Support automatically validating whether TiDB Cloud can access your source data after you confirm the import task on the **Data Import Task** page.
-* Change the theme color of the TiDB Cloud console to make it consistent with that of [PingCAP website](https://en.pingcap.com/).
+* Change the theme color of the TiDB Cloud console to make it consistent with that of the [PingCAP website](https://www.pingcap.com/).

 ## July 12, 2022
@@ -86,7 +86,7 @@ This page lists the release notes of [TiDB Cloud](https://en.pingcap.com/tidb-cl
 ## June 7, 2022

 * Add the [Try Free](https://tidbcloud.com/free-trial) registration page to quickly sign up for TiDB Cloud.
-* Remove the **Proof of Concept plan** option from the plan selection page. If you want to apply for a 14-day PoC trial for free, go to the [Apply for PoC](https://en.pingcap.com/apply-for-poc/) page. For more information, see [Perform a Proof of Concept (PoC) with TiDB Cloud](/tidb-cloud/tidb-cloud-poc.md).
+* Remove the **Proof of Concept plan** option from the plan selection page. If you want to apply for a 14-day PoC trial for free, contact us. For more information, see [Perform a Proof of Concept (PoC) with TiDB Cloud](/tidb-cloud/tidb-cloud-poc.md).
 * Improve the system security by prompting users who sign up for TiDB Cloud with emails and passwords to reset their passwords every 90 days.

 ## May 24, 2022
@@ -141,7 +141,7 @@ General changes:

 * Introduce a new public region: `eu-central-1`.
 * Deprecate 8 vCPU TiFlash and provide 16 vCPU TiFlash.
 * Separate the price of CPU and storage (both have 30% public preview discount).
-* Update the [billing information](/tidb-cloud/tidb-cloud-billing.md) and the [price table](https://en.pingcap.com/tidb-cloud/#pricing).
+* Update the [billing information](/tidb-cloud/tidb-cloud-billing.md) and the [price table](https://www.pingcap.com/pricing/).

 New features:
diff --git a/tidb-cloud/tidb-cloud-faq.md b/tidb-cloud/tidb-cloud-faq.md
index 1af33bfe5dc6f..63f2a732f823a 100644
--- a/tidb-cloud/tidb-cloud-faq.md
+++ b/tidb-cloud/tidb-cloud-faq.md
@@ -83,7 +83,7 @@ Traditionally, there are two types of databases: Online Transactional Processing

 As a Hybrid Transactional Analytical Processing (HTAP) database, TiDB Cloud helps you simplify your system architecture, reduce maintenance complexity, and support real-time analytics on transactional data by automatically replicating data reliably between the OLTP (TiKV) store and OLAP (TiFlash) store. Typical HTAP use cases are user personalization, AI recommendation, fraud detection, business intelligence, and real-time reporting.

-For further HTAP scenarios, refer to [How We Build an HTAP Database That Simplifies Your Data Platform](https://pingcap.com/blog/how-we-build-an-htap-database-that-simplifies-your-data-platform).
+For further HTAP scenarios, refer to [How We Build an HTAP Database That Simplifies Your Data Platform](https://www.pingcap.com/blog/how-we-build-an-htap-database-that-simplifies-your-data-platform/).

 ## Is there an easy migration path from another RDBMS to TiDB Cloud?
diff --git a/tiflash/tiflash-overview.md b/tiflash/tiflash-overview.md
index ac2bb02ce0317..045c9974d8115 100644
--- a/tiflash/tiflash-overview.md
+++ b/tiflash/tiflash-overview.md
@@ -15,7 +15,7 @@ In TiFlash, the columnar replicas are asynchronously replicated according to the

 The above figure is the architecture of TiDB in its HTAP form, including TiFlash nodes.

-TiFlash provides the columnar storage, with a layer of coprocessors efficiently implemented by ClickHouse. Similar to TiKV, TiFlash also has a Multi-Raft system, which supports replicating and distributing data in the unit of Region (see [Data Storage](https://en.pingcap.com/blog/tidb-internal-data-storage/) for details).
+TiFlash provides the columnar storage, with a layer of coprocessors efficiently implemented by ClickHouse. Similar to TiKV, TiFlash also has a Multi-Raft system, which supports replicating and distributing data in the unit of Region (see [Data Storage](https://www.pingcap.com/blog/tidb-internal-data-storage/) for details).

 TiFlash conducts real-time replication of data in the TiKV nodes at a low cost that does not block writes in TiKV. Meanwhile, it provides the same read consistency as in TiKV and ensures that the latest data is read. The Region replica in TiFlash is logically identical to those in TiKV, and is split and merged along with the Leader replica in TiKV at the same time.
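As a companion to the TiFlash hunk above, the following sketch shows how a columnar replica is requested and how its replication progress can be observed. The table name `t` is an assumed placeholder:

```sql
-- Ask TiFlash to maintain 2 columnar replicas of table `t`. Replication
-- from the TiKV Region Leaders happens asynchronously via Raft learners,
-- so writes to TiKV are not blocked.
ALTER TABLE t SET TIFLASH REPLICA 2;

-- Check progress; AVAILABLE = 1 once the replicas can serve queries.
SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS
FROM information_schema.tiflash_replica
WHERE TABLE_NAME = 't';
```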