diff --git a/en/cheat-sheet.md b/en/cheat-sheet.md index db3b8eda6f..0e871e5e49 100644 --- a/en/cheat-sheet.md +++ b/en/cheat-sheet.md @@ -485,7 +485,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm inspect values pingcap/tidb-operator --version=v1.1.9 > values-tidb-operator.yaml +helm inspect values pingcap/tidb-operator --version=v1.1.10 > values-tidb-operator.yaml ``` ### Deploy using Helm chart @@ -501,7 +501,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.9 -f values-tidb-operator.yaml +helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.10 -f values-tidb-operator.yaml ``` ### View the deployed Helm release @@ -525,7 +525,7 @@ For example: {{< copyable "shell-regular" >}} ```shell -helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.9 -f values-tidb-operator.yaml +helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.10 -f values-tidb-operator.yaml ``` ### Delete Helm release diff --git a/en/configure-storage-class.md b/en/configure-storage-class.md index 0a3f3e702f..0e7cf7d886 100644 --- a/en/configure-storage-class.md +++ b/en/configure-storage-class.md @@ -77,7 +77,7 @@ The following process uses `/mnt/disks` as the discovery directory and `local-st {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/local-dind/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/local-dind/local-volume-provisioner.yaml ``` If the server has no access to the Internet, download the `local-volume-provisioner.yaml` file on a machine with Internet access and then install it. 
@@ -85,7 +85,7 @@ The following process uses `/mnt/disks` as the discovery directory and `local-st {{< copyable "shell-regular" >}} ```shell - wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/local-dind/local-volume-provisioner.yaml && + wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/local-dind/local-volume-provisioner.yaml && kubectl apply -f ./local-volume-provisioner.yaml ``` @@ -254,7 +254,7 @@ Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner {{< copyable "shell-regular" >}} ```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/local-dind/local-volume-provisioner.yaml +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/local-dind/local-volume-provisioner.yaml ``` When you later deploy tidb clusters, deploy TiDB Binlog for incremental backups, or do full backups, configure the corresponding `StorageClass` for use. diff --git a/en/deploy-on-alibaba-cloud.md b/en/deploy-on-alibaba-cloud.md index 36e47ff84c..c0ebaed6ca 100644 --- a/en/deploy-on-alibaba-cloud.md +++ b/en/deploy-on-alibaba-cloud.md @@ -89,7 +89,7 @@ All the instances except ACK mandatory workers are deployed across availability tikv_count = 3 tidb_count = 2 pd_count = 3 - operator_version = "v1.1.9" + operator_version = "v1.1.10" ``` * To deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `tiflash_count` and `tiflash_instance_type`. By default, the value of `tiflash_count` is `2`, and the value of `tiflash_instance_type` is `ecs.i2.2xlarge`. 
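The hunks above and below repeat one mechanical substitution: every pinned `v1.1.9` reference becomes `v1.1.10`. A minimal sketch of producing such a bump with `sed` — the sample line and variable names are illustrative, not the command that actually generated this diff:

```shell
# Rewrite a pinned TiDB Operator version in a doc line.
# Dots are escaped in the pattern so sed matches them literally
# instead of treating them as "any character".
OLD='v1\.1\.9'
NEW='v1.1.10'
printf 'operator_version = "v1.1.9"\n' | sed "s/${OLD}/${NEW}/g"
# prints: operator_version = "v1.1.10"
```

Applied in place (for example, `sed -i` over the `en/` and `zh/` Markdown files), this yields exactly the kind of one-line hunks shown in this diff.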
diff --git a/en/deploy-tidb-from-kubernetes-gke.md b/en/deploy-tidb-from-kubernetes-gke.md index d90c05950a..c68849533d 100644 --- a/en/deploy-tidb-from-kubernetes-gke.md +++ b/en/deploy-tidb-from-kubernetes-gke.md @@ -97,7 +97,7 @@ If you see `Ready` for all nodes, congratulations! You've set up your first Kube TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` CRD. ```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml && \ +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml && \ kubectl get crd tidbclusters.pingcap.com ``` @@ -105,7 +105,7 @@ After the `TidbCluster` CRD is created, install TiDB Operator in your Kubernetes ```shell kubectl create namespace tidb-admin -helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.9 +helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.10 kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator ``` diff --git a/en/deploy-tidb-operator.md b/en/deploy-tidb-operator.md index 40cb46b432..36ab966e43 100644 --- a/en/deploy-tidb-operator.md +++ b/en/deploy-tidb-operator.md @@ -49,7 +49,7 @@ TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs {{< copyable "shell-regular" >}} ```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml ``` If the server cannot access the Internet, you need to download the `crd.yaml` file on a machine with Internet access before installing: @@ -57,7 +57,7 @@ If the server cannot access the Internet, you need to download the `crd.yaml` fi 
{{< copyable "shell-regular" >}} ```shell -wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml +wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml kubectl apply -f ./crd.yaml ``` @@ -99,7 +99,7 @@ After creating CRDs in the step above, there are two methods to deploy TiDB Oper > **Note:** > - > `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.1.9`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command. + > `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.1.10`. You can view the currently supported versions by running the `helm search repo -l tidb-operator` command. 2. Configure TiDB Operator @@ -139,15 +139,15 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.1.9.tgz + wget http://charts.pingcap.org/tidb-operator-v1.1.10.tgz ``` - Copy the `tidb-operator-v1.1.9.tgz` file to the target server and extract it to the current directory: + Copy the `tidb-operator-v1.1.10.tgz` file to the target server and extract it to the current directory: {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator.v1.1.9.tgz + tar zxvf tidb-operator-v1.1.10.tgz ``` 2.
Download the Docker images used by TiDB Operator @@ -159,8 +159,8 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - pingcap/tidb-operator:v1.1.9 - pingcap/tidb-backup-manager:v1.1.9 + pingcap/tidb-operator:v1.1.10 + pingcap/tidb-backup-manager:v1.1.10 bitnami/kubectl:latest pingcap/advanced-statefulset:v0.3.3 k8s.gcr.io/kube-scheduler:v1.16.9 @@ -173,13 +173,13 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.1.9 - docker pull pingcap/tidb-backup-manager:v1.1.9 + docker pull pingcap/tidb-operator:v1.1.10 + docker pull pingcap/tidb-backup-manager:v1.1.10 docker pull bitnami/kubectl:latest docker pull pingcap/advanced-statefulset:v0.3.3 - docker save -o tidb-operator-v1.1.9.tar pingcap/tidb-operator:v1.1.9 - docker save -o tidb-backup-manager-v1.1.9.tar pingcap/tidb-backup-manager:v1.1.9 + docker save -o tidb-operator-v1.1.10.tar pingcap/tidb-operator:v1.1.10 + docker save -o tidb-backup-manager-v1.1.10.tar pingcap/tidb-backup-manager:v1.1.10 docker save -o bitnami-kubectl.tar bitnami/kubectl:latest docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3 ``` @@ -189,8 +189,8 @@ If your server cannot access the Internet, install TiDB Operator offline by the {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.1.9.tar - docker load -i tidb-backup-manager-v1.1.9.tar + docker load -i tidb-operator-v1.1.10.tar + docker load -i tidb-backup-manager-v1.1.10.tar docker load -i bitnami-kubectl.tar docker load -i advanced-statefulset-v0.3.3.tar ``` diff --git a/en/get-started.md b/en/get-started.md index 4f806fbcd7..1cee045fec 100644 --- a/en/get-started.md +++ b/en/get-started.md @@ -244,7 +244,7 @@ Execute this command to install the CRDs into your cluster: {{< copyable "shell-regular" >}} ```shell -kubectl apply -f 
https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml ``` Expected output: @@ -296,7 +296,7 @@ This section describes how to install TiDB Operator using Helm 3. {{< copyable "shell-regular" >}} ```shell - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.9 + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.10 ``` If you have trouble accessing Docker Hub, you can try images hosted in Alibaba Cloud: @@ -304,9 +304,9 @@ This section describes how to install TiDB Operator using Helm 3. {{< copyable "shell-regular" >}} ``` - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.9 \ - --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.9 \ - --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.9 \ + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.10 \ + --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.10 \ + --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.10 \ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler ``` diff --git a/en/tidb-toolkit.md b/en/tidb-toolkit.md index c4159162ff..f5fe1c3622 100644 --- a/en/tidb-toolkit.md +++ b/en/tidb-toolkit.md @@ -201,12 +201,12 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.1.9 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.1.9 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.1.9 A Helm chart for TiDB Binlog drainer. 
-pingcap/tidb-lightning v1.1.9 A Helm chart for TiDB Lightning -pingcap/tidb-operator v1.1.9 v1.1.9 tidb-operator Helm chart for Kubernetes -pingcap/tikv-importer v1.1.9 A Helm chart for TiKV Importer +pingcap/tidb-backup v1.1.10 A Helm chart for TiDB Backup or Restore +pingcap/tidb-cluster v1.1.10 A Helm chart for TiDB Cluster +pingcap/tidb-drainer v1.1.10 A Helm chart for TiDB Binlog drainer. +pingcap/tidb-lightning v1.1.10 A Helm chart for TiDB Lightning +pingcap/tidb-operator v1.1.10 v1.1.10 tidb-operator Helm chart for Kubernetes +pingcap/tikv-importer v1.1.10 A Helm chart for TiKV Importer ``` When a new version of chart has been released, you can use `helm repo update` to update the repository cached locally: @@ -268,9 +268,9 @@ Use the following command to download the chart file required for cluster instal {{< copyable "shell-regular" >}} ```shell -wget http://charts.pingcap.org/tidb-operator-v1.1.9.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.1.9.tgz -wget http://charts.pingcap.org/tidb-lightning-v1.1.9.tgz +wget http://charts.pingcap.org/tidb-operator-v1.1.10.tgz +wget http://charts.pingcap.org/tidb-drainer-v1.1.10.tgz +wget http://charts.pingcap.org/tidb-lightning-v1.1.10.tgz ``` Copy these chart files to the server and decompress them. You can use these charts to install the corresponding components by running the `helm install` command. Take `tidb-operator` as an example: @@ -278,7 +278,7 @@ Copy these chart files to the server and decompress them. You can use these char {{< copyable "shell-regular" >}} ```shell -tar zxvf tidb-operator.v1.1.9.tgz +tar zxvf tidb-operator-v1.1.10.tgz helm install ${release_name} ./tidb-operator --namespace=${namespace} ``` diff --git a/en/upgrade-tidb-operator.md b/en/upgrade-tidb-operator.md index 252df4ed11..5fe41a9440 100644 --- a/en/upgrade-tidb-operator.md +++ b/en/upgrade-tidb-operator.md @@ -21,7 +21,7 @@ This document describes how to upgrade TiDB Operator and Kubernetes.
> **Note:** > - > The `${version}` in this document represents the version of TiDB Operator, such as `v1.1.9`. You can check the currently supported version using the `helm search repo -l tidb-operator` command. + > The `${version}` in this document represents the version of TiDB Operator, such as `v1.1.10`. You can check the currently supported version using the `helm search repo -l tidb-operator` command. > If the command output does not include the latest version, update the repo using the `helm repo update` command. For details, refer to [Configure the Help repo](tidb-toolkit.md#configure-the-helm-repo). 2. Get the `values.yaml` file of the `tidb-operator` chart that you want to install: diff --git a/zh/cheat-sheet.md b/zh/cheat-sheet.md index 0028e6dc37..47678f12ab 100644 --- a/zh/cheat-sheet.md +++ b/zh/cheat-sheet.md @@ -485,7 +485,7 @@ helm inspect values ${chart_name} --version=${chart_version} > values.yaml {{< copyable "shell-regular" >}} ```shell -helm inspect values pingcap/tidb-operator --version=v1.1.9 > values-tidb-operator.yaml +helm inspect values pingcap/tidb-operator --version=v1.1.10 > values-tidb-operator.yaml ``` ### 使用 Helm Chart 部署 @@ -501,7 +501,7 @@ helm install ${name} ${chart_name} --namespace=${namespace} --version=${chart_ve {{< copyable "shell-regular" >}} ```shell -helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.9 -f values-tidb-operator.yaml +helm install tidb-operator pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.10 -f values-tidb-operator.yaml ``` ### 查看已经部署的 Helm Release @@ -525,7 +525,7 @@ helm upgrade ${name} ${chart_name} --version=${chart_version} -f ${values_file} {{< copyable "shell-regular" >}} ```shell -helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.9 -f values-tidb-operator.yaml +helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.10 -f values-tidb-operator.yaml ``` ### 删除 Helm Release diff --git a/zh/configure-storage-class.md 
b/zh/configure-storage-class.md index 38105b90d7..3b0ffb19d1 100644 --- a/zh/configure-storage-class.md +++ b/zh/configure-storage-class.md @@ -77,7 +77,7 @@ Kubernetes 当前支持静态分配的本地存储。可使用 [local-static-pro {{< copyable "shell-regular" >}} ```shell - kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/local-dind/local-volume-provisioner.yaml + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/local-dind/local-volume-provisioner.yaml ``` 如果服务器没有外网,需要先用有外网的机器下载 `local-volume-provisioner.yaml` 文件,然后再进行安装: @@ -85,7 +85,7 @@ Kubernetes 当前支持静态分配的本地存储。可使用 [local-static-pro {{< copyable "shell-regular" >}} ```shell - wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/local-dind/local-volume-provisioner.yaml + wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/local-dind/local-volume-provisioner.yaml kubectl apply -f ./local-volume-provisioner.yaml ``` @@ -254,7 +254,7 @@ data: {{< copyable "shell-regular" >}} ```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/local-dind/local-volume-provisioner.yaml +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/local-dind/local-volume-provisioner.yaml ``` 后续创建 TiDB 集群或备份等组件的时候,再配置相应的 `StorageClass` 供其使用。 diff --git a/zh/deploy-on-alibaba-cloud.md b/zh/deploy-on-alibaba-cloud.md index 43cf6428d5..2486489da8 100644 --- a/zh/deploy-on-alibaba-cloud.md +++ b/zh/deploy-on-alibaba-cloud.md @@ -89,7 +89,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/stable/deploy-on-alibaba-cloud/','/docs-c tikv_count = 3 tidb_count = 2 pd_count = 3 - operator_version = "v1.1.9" + operator_version = "v1.1.10" ``` 如果需要在集群中部署 TiFlash,需要在 `terraform.tfvars` 中设置 `create_tiflash_node_pool = true`,也可以设置 `tiflash_count` 和 `tiflash_instance_type` 来配置 TiFlash 节点池的节点数量和实例类型,`tiflash_count` 默认为 `2`,`tiflash_instance_type` 默认为 `ecs.i2.2xlarge`。 diff --git 
a/zh/deploy-on-aws-eks.md b/zh/deploy-on-aws-eks.md index b6a48d99e9..dc1d4c8f08 100644 --- a/zh/deploy-on-aws-eks.md +++ b/zh/deploy-on-aws-eks.md @@ -174,8 +174,8 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/examples/aws/tidb-cluster.yaml && -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/examples/aws/tidb-monitor.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/examples/aws/tidb-cluster.yaml && +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/examples/aws/tidb-monitor.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) diff --git a/zh/deploy-on-gcp-gke.md b/zh/deploy-on-gcp-gke.md index 68c88eca35..605ce6cf3b 100644 --- a/zh/deploy-on-gcp-gke.md +++ b/zh/deploy-on-gcp-gke.md @@ -94,8 +94,8 @@ kubectl create namespace tidb-cluster {{< copyable "shell-regular" >}} ```shell -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/examples/gcp/tidb-cluster.yaml && -curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/examples/gcp/tidb-monitor.yaml +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/examples/gcp/tidb-cluster.yaml && +curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/examples/gcp/tidb-monitor.yaml ``` 如需了解更详细的配置信息或者进行自定义配置,请参考[配置 TiDB 集群](configure-a-tidb-cluster.md) diff --git a/zh/deploy-tidb-from-kubernetes-gke.md b/zh/deploy-tidb-from-kubernetes-gke.md index 69fd0adaac..4651672c13 100644 --- a/zh/deploy-tidb-from-kubernetes-gke.md +++ b/zh/deploy-tidb-from-kubernetes-gke.md @@ -94,7 +94,7 @@ kubectl get nodes TiDB Operator 使用 [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) 扩展 Kubernetes,所以要使用 TiDB Operator,必须先创建 `TidbCluster` 等各种自定义资源类型: ```shell -kubectl apply -f 
https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml && \ +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml && \ kubectl get crd tidbclusters.pingcap.com ``` @@ -102,7 +102,7 @@ kubectl get crd tidbclusters.pingcap.com ```shell kubectl create namespace tidb-admin -helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.9 +helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.10 kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator ``` diff --git a/zh/deploy-tidb-operator.md b/zh/deploy-tidb-operator.md index a18d96ef8e..609ee048bb 100644 --- a/zh/deploy-tidb-operator.md +++ b/zh/deploy-tidb-operator.md @@ -49,7 +49,7 @@ TiDB Operator 使用 [Custom Resource Definition (CRD)](https://kubernetes.io/do {{< copyable "shell-regular" >}} ```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml ``` 如果服务器没有外网,需要先用有外网的机器下载 `crd.yaml` 文件,然后再进行安装: @@ -57,7 +57,7 @@ kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/ {{< copyable "shell-regular" >}} ```shell -wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml +wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml kubectl apply -f ./crd.yaml ``` @@ -99,7 +99,7 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z > **注意:** > - > `${chart_version}` 在后续文档中代表 chart 版本,例如 `v1.1.9`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 + > `${chart_version}` 在后续文档中代表 chart 版本,例如 `v1.1.10`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 2. 
配置 TiDB Operator @@ -139,15 +139,15 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - wget http://charts.pingcap.org/tidb-operator-v1.1.9.tgz + wget http://charts.pingcap.org/tidb-operator-v1.1.10.tgz ``` - 将 `tidb-operator-v1.1.9.tgz` 文件拷贝到服务器上并解压到当前目录: + 将 `tidb-operator-v1.1.10.tgz` 文件拷贝到服务器上并解压到当前目录: {{< copyable "shell-regular" >}} ```shell - tar zxvf tidb-operator.v1.1.9.tgz + tar zxvf tidb-operator-v1.1.10.tgz ``` 2. 下载 TiDB Operator 运行所需的 Docker 镜像 @@ -159,8 +159,8 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - pingcap/tidb-operator:v1.1.9 - pingcap/tidb-backup-manager:v1.1.9 + pingcap/tidb-operator:v1.1.10 + pingcap/tidb-backup-manager:v1.1.10 bitnami/kubectl:latest pingcap/advanced-statefulset:v0.3.3 k8s.gcr.io/kube-scheduler:v1.16.9 @@ -173,13 +173,13 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - docker pull pingcap/tidb-operator:v1.1.9 - docker pull pingcap/tidb-backup-manager:v1.1.9 + docker pull pingcap/tidb-operator:v1.1.10 + docker pull pingcap/tidb-backup-manager:v1.1.10 docker pull bitnami/kubectl:latest docker pull pingcap/advanced-statefulset:v0.3.3 - docker save -o tidb-operator-v1.1.9.tar pingcap/tidb-operator:v1.1.9 - docker save -o tidb-backup-manager-v1.1.9.tar pingcap/tidb-backup-manager:v1.1.9 + docker save -o tidb-operator-v1.1.10.tar pingcap/tidb-operator:v1.1.10 + docker save -o tidb-backup-manager-v1.1.10.tar pingcap/tidb-backup-manager:v1.1.10 docker save -o bitnami-kubectl.tar bitnami/kubectl:latest docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3 ``` @@ -189,8 +189,8 @@ tidbmonitors.pingcap.com 2020-06-11T07:59:41Z {{< copyable "shell-regular" >}} ```shell - docker load -i tidb-operator-v1.1.9.tar - docker load -i tidb-backup-manager-v1.1.9.tar + docker load -i tidb-operator-v1.1.10.tar + docker load -i tidb-backup-manager-v1.1.10.tar docker load -i
bitnami-kubectl.tar docker load -i advanced-statefulset-v0.3.3.tar ``` diff --git a/zh/get-started.md b/zh/get-started.md index fa102ee765..cac1d72049 100644 --- a/zh/get-started.md +++ b/zh/get-started.md @@ -239,7 +239,7 @@ TiDB Operator 包含许多实现 TiDB 集群不同组件的自定义资源类型 {{< copyable "shell-regular" >}} ```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.9/manifests/crd.yaml +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.10/manifests/crd.yaml ``` 期望输出: @@ -291,7 +291,7 @@ TiDB Operator 使用 Helm 3 安装。 {{< copyable "shell-regular" >}} ```shell - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.9 + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.10 ``` 如果访问 Docker Hub 网速较慢,可以使用阿里云上的镜像: @@ -299,9 +299,9 @@ TiDB Operator 使用 Helm 3 安装。 {{< copyable "shell-regular" >}} ``` - helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.9 \ - --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.9 \ - --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.9 \ + helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.10 \ + --set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.10 \ + --set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.10 \ --set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler ``` diff --git a/zh/tidb-toolkit.md b/zh/tidb-toolkit.md index 46ce74d8f9..f825c4e606 100644 --- a/zh/tidb-toolkit.md +++ b/zh/tidb-toolkit.md @@ -201,12 +201,12 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.1.9 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.1.9 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.1.9 A Helm chart for TiDB Binlog 
drainer. -pingcap/tidb-lightning v1.1.9 A Helm chart for TiDB Lightning -pingcap/tidb-operator v1.1.9 v1.1.9 tidb-operator Helm chart for Kubernetes -pingcap/tikv-importer v1.1.9 A Helm chart for TiKV Importer +pingcap/tidb-backup v1.1.10 A Helm chart for TiDB Backup or Restore +pingcap/tidb-cluster v1.1.10 A Helm chart for TiDB Cluster +pingcap/tidb-drainer v1.1.10 A Helm chart for TiDB Binlog drainer. +pingcap/tidb-lightning v1.1.10 A Helm chart for TiDB Lightning +pingcap/tidb-operator v1.1.10 v1.1.10 tidb-operator Helm chart for Kubernetes +pingcap/tikv-importer v1.1.10 A Helm chart for TiKV Importer ``` 当新版本的 chart 发布后,你可以使用 `helm repo update` 命令更新本地对于仓库的缓存: @@ -266,9 +266,9 @@ helm uninstall ${release_name} {{< copyable "shell-regular" >}} ```shell -wget http://charts.pingcap.org/tidb-operator-v1.1.9.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.1.9.tgz -wget http://charts.pingcap.org/tidb-lightning-v1.1.9.tgz +wget http://charts.pingcap.org/tidb-operator-v1.1.10.tgz +wget http://charts.pingcap.org/tidb-drainer-v1.1.10.tgz +wget http://charts.pingcap.org/tidb-lightning-v1.1.10.tgz ``` 将这些 chart 文件拷贝到服务器上并解压,可以通过 `helm install` 命令使用这些 chart 来安装相应组件,以 `tidb-operator` 为例: @@ -276,7 +276,7 @@ wget http://charts.pingcap.org/tidb-lightning-v1.1.9.tgz {{< copyable "shell-regular" >}} ```shell -tar zxvf tidb-operator.v1.1.9.tgz +tar zxvf tidb-operator-v1.1.10.tgz helm install ${release_name} ./tidb-operator --namespace=${namespace} ``` diff --git a/zh/upgrade-tidb-operator.md b/zh/upgrade-tidb-operator.md index c02710d4de..055a5ab0f5 100644 --- a/zh/upgrade-tidb-operator.md +++ b/zh/upgrade-tidb-operator.md @@ -21,7 +21,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/stable/upgrade-tidb-operator/','/docs-cn/ > **注意:** > - > `${version}` 在后续文档中代表 TiDB Operator 版本,例如 `v1.1.9`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 + > `${version}` 在后续文档中代表 TiDB Operator 版本,例如 `v1.1.10`,可以通过 `helm search repo -l tidb-operator` 查看当前支持的版本。 > 如果未包含最新版本,可以通过 `helm repo
update` 更新 repo。详情请参考[配置 Helm repo](tidb-toolkit.md#配置-helm-repo) )。 2. 获取你要安装的 `tidb-operator` chart 中的 `values.yaml` 文件:
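After a repo-wide bump like this one, a quick sanity check is to grep for leftovers of the old version string. A hedged sketch, using a temporary file as a stand-in for the real doc tree (in practice you would run `grep -r` over the `en/` and `zh/` directories):

```shell
# Verify a doc fragment contains no stale v1.1.9 reference after the
# bump. The file content here is a placeholder for the real docs.
tmp=$(mktemp)
printf 'helm install tidb-operator pingcap/tidb-operator --version v1.1.10\n' > "$tmp"
if grep -q 'v1\.1\.9' "$tmp"; then
  echo "stale version reference found"
else
  echo "clean"
fi
rm -f "$tmp"
# prints: clean
```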