diff --git a/TOC-tidb-cloud.md b/TOC-tidb-cloud.md index 06e700f46f808..2349ebfb8aacb 100644 --- a/TOC-tidb-cloud.md +++ b/TOC-tidb-cloud.md @@ -232,7 +232,8 @@ - [Import Apache Parquet Files from Amazon S3 or GCS](/tidb-cloud/import-parquet-files.md) - [Import with MySQL CLI](/tidb-cloud/import-with-mysql-cli.md) - Reference - - [Configure Amazon S3 Access and GCS Access](/tidb-cloud/config-s3-and-gcs-access.md) + - [Configure External Storage Access for TiDB Dedicated](/tidb-cloud/config-s3-and-gcs-access.md) + - [Configure External Storage Access for TiDB Serverless](/tidb-cloud/serverless-external-storage.md) - [Naming Conventions for Data Import](/tidb-cloud/naming-conventions-for-data-import.md) - [CSV Configurations for Importing Data](/tidb-cloud/csv-config-for-import-data.md) - [Troubleshoot Access Denied Errors during Data Import from Amazon S3](/tidb-cloud/troubleshoot-import-access-denied-error.md) diff --git a/media/tidb-cloud/serverless-external-storage/azure-sas-create.png b/media/tidb-cloud/serverless-external-storage/azure-sas-create.png new file mode 100644 index 0000000000000..f8c50e42359d9 Binary files /dev/null and b/media/tidb-cloud/serverless-external-storage/azure-sas-create.png differ diff --git a/media/tidb-cloud/serverless-external-storage/azure-sas-position.png b/media/tidb-cloud/serverless-external-storage/azure-sas-position.png new file mode 100644 index 0000000000000..01fbb2edaf7b0 Binary files /dev/null and b/media/tidb-cloud/serverless-external-storage/azure-sas-position.png differ diff --git a/media/tidb-cloud/serverless-external-storage/gcs-service-account-key.png b/media/tidb-cloud/serverless-external-storage/gcs-service-account-key.png new file mode 100644 index 0000000000000..655c502fbab4b Binary files /dev/null and b/media/tidb-cloud/serverless-external-storage/gcs-service-account-key.png differ diff --git a/media/tidb-cloud/serverless-external-storage/gcs-service-account.png 
b/media/tidb-cloud/serverless-external-storage/gcs-service-account.png new file mode 100644 index 0000000000000..7ab07dc7c8885 Binary files /dev/null and b/media/tidb-cloud/serverless-external-storage/gcs-service-account.png differ diff --git a/media/tidb-cloud/serverless-external-storage/serverless-role-arn.png b/media/tidb-cloud/serverless-external-storage/serverless-role-arn.png new file mode 100644 index 0000000000000..509f1bf06940d Binary files /dev/null and b/media/tidb-cloud/serverless-external-storage/serverless-role-arn.png differ diff --git a/tidb-cloud/config-s3-and-gcs-access.md b/tidb-cloud/config-s3-and-gcs-access.md index 784df3f8ce80d..686ae00442019 100644 --- a/tidb-cloud/config-s3-and-gcs-access.md +++ b/tidb-cloud/config-s3-and-gcs-access.md @@ -1,11 +1,13 @@ --- -title: Configure Amazon S3 Access and GCS Access +title: Configure External Storage Access for TiDB Dedicated summary: Learn how to configure Amazon Simple Storage Service (Amazon S3) access and Google Cloud Storage (GCS) access. --- -# Configure Amazon S3 Access and GCS Access +# Configure External Storage Access for TiDB Dedicated -If your source data is stored in Amazon S3 or Google Cloud Storage (GCS) buckets, before importing or migrating the data to TiDB Cloud, you need to configure cross-account access to the buckets. This document describes how to do this. +If your source data is stored in Amazon S3 or Google Cloud Storage (GCS) buckets, before importing or migrating the data to TiDB Cloud, you need to configure cross-account access to the buckets. This document describes how to do this for TiDB Dedicated clusters. + +If you need to configure these external storages for TiDB Serverless clusters, see [Configure External Storage Access for TiDB Serverless](/tidb-cloud/serverless-external-storage.md). 
## Configure Amazon S3 access @@ -98,9 +100,9 @@ Configure the bucket access for TiDB Cloud and get the Role ARN as follows: If the objects in your bucket have been copied from another encrypted bucket, the KMS key value needs to include the keys of both buckets. For example, `"Resource": ["arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f","arn:aws:kms:ap-northeast-1:495580073302:key/0d7926a7-6ecc-4bf7-a9c1-a38f0faec0cd"]`. - 6. Click **Next: Tags**, add a tag of the policy (optional), and then click **Next:Review**. - - 7. Set a policy name, and then click **Create policy**. + 6. Click **Next**. + + 7. Set a policy name, add a tag of the policy (optional), and then click **Create policy**. 3. In the AWS Management Console, create an access role for TiDB Cloud and get the role ARN. diff --git a/tidb-cloud/serverless-export.md b/tidb-cloud/serverless-export.md index 199fd7c93c0d4..22f54cf0226df 100644 --- a/tidb-cloud/serverless-export.md +++ b/tidb-cloud/serverless-export.md @@ -5,7 +5,7 @@ summary: Learn how to export data from TiDB Serverless clusters. # Export Data from TiDB Serverless -TiDB Serverless Export (Beta) is a service that enables you to export data from a TiDB Serverless cluster to local storage or an external storage service. You can use the exported data for backup, migration, data analysis, or other purposes. +TiDB Serverless Export (Beta) is a service that enables you to export data from a TiDB Serverless cluster to a local file or an external storage service. You can use the exported data for backup, migration, data analysis, or other purposes. While you can also export data using tools such as [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) and TiDB [Dumpling](https://docs.pingcap.com/tidb/dev/dumpling-overview), TiDB Serverless Export offers a more convenient and efficient way to export data from a TiDB Serverless cluster. 
It brings the following benefits: @@ -13,116 +13,265 @@ While you can also export data using tools such as [mysqldump](https://dev.mysql - Isolation: the export service uses separate computing resources, ensuring isolation from the resources used by your online services. - Consistency: the export service ensures the consistency of the exported data without causing locks, which does not affect your online services. -## Features +## Export locations -This section describes the features of TiDB Serverless Export. +You can export data to: -### Export location - -You can export data to local storage or [Amazon S3](https://aws.amazon.com/s3/). +- A local file +- An external storage, including: + - [Amazon S3](https://aws.amazon.com/s3/) + - [Google Cloud Storage](https://cloud.google.com/storage) + - [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/) > **Note:** > -> If the size of the data to be exported is large (more than 100 GiB), it is recommended that you export it to Amazon S3. +> If the size of the data to be exported is large (more than 100 GiB), it is recommended that you export it to an external storage. + +### A local file -**Local storage** +To export data from a TiDB Serverless cluster to a local file, you need to export data [using the TiDB Cloud console](#export-data-to-a-local-file) or [using the TiDB Cloud CLI](/tidb-cloud/ticloud-serverless-export-create.md), and then download the exported data using the TiDB Cloud CLI. -Exporting data to local storage has the following limitations: +Exporting data to a local file has the following limitations: -- Exporting multiple databases to local storage at the same time is not supported. +- Downloading exported data using the TiDB Cloud console is not supported. - Exported data is saved in the stashing area and will expire after two days. You need to download the exported data in time. 
-- If the storage space of stashing area is full, you will not be able to export data to local storage.
+- If the storage space of the TiDB Cloud stashing area is full, you will not be able to export data to a local file.

-**Amazon S3**
+### Amazon S3

-To export data to Amazon S3, you need to provide an [access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for your S3 bucket. Make sure the access key has read and write access for your S3 bucket, including at least these permissions: `s3:PutObject` and `s3:ListBucket`.
+To export data to Amazon S3, you need to provide the following information:

-### Data filtering
+- URI: `s3://<bucket-name>/<folder-path>`
+- One of the following access credentials:
+    - [An access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html): make sure the access key has the `s3:PutObject` and `s3:ListBucket` permissions.
+    - [A role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html): make sure the role ARN has the `s3:PutObject` and `s3:ListBucket` permissions.
+
+For more information, see [Configure External Storage Access for TiDB Serverless](/tidb-cloud/serverless-external-storage.md#configure-amazon-s3-access).
+
+### Google Cloud Storage
+
+To export data to Google Cloud Storage, you need to provide the following information:
+
+- URI: `gs://<bucket-name>/<folder-path>`
+- Access credential: a **base64 encoded** [service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) for your bucket. Make sure the service account key has the `storage.objects.create` permission.
+
+For more information, see [Configure External Storage Access for TiDB Serverless](/tidb-cloud/serverless-external-storage.md#configure-gcs-access).
+
+> **Note:**
+>
+> Currently, you can only export data to Google Cloud Storage using the [TiDB Cloud CLI](/tidb-cloud/cli-reference.md).
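The **base64 encoded** service account key mentioned above can be produced from the downloaded JSON key file with standard tools. A minimal sketch (the file name `key.json` and its content are placeholders for your downloaded key):

```shell
# Placeholder for the service account key file downloaded from Google Cloud.
cat > key.json <<'EOF'
{"type": "service_account", "project_id": "my-project"}
EOF

# Encode the key file as a single base64 line; `tr` strips the line wraps
# that some base64 implementations insert by default.
SERVICE_ACCOUNT_KEY=$(base64 < key.json | tr -d '\n')

# This encoded value is what you pass as the access credential.
echo "$SERVICE_ACCOUNT_KEY"
```

On Linux, `base64 -w 0 key.json` produces the same unwrapped output directly.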
+
+### Azure Blob Storage
+
+To export data to Azure Blob Storage, you need to provide the following information:
+
+- URI: `https://<account-name>.blob.core.windows.net/<container-name>/<folder-path>`
+- Access credential: a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview) for your Azure Blob Storage container. Make sure the SAS token has the `Read` and `Write` permissions on the `Container` and `Object` resources.

-You can filter data by specifying the database and table you want to export. If you specify a database without specifying a table, all tables in that specified database will be exported. If you do not specify a database when you export data to Amazon S3, all databases in the cluster will be exported.
+For more information, see [Configure External Storage Access for TiDB Serverless](/tidb-cloud/serverless-external-storage.md#configure-azure-blob-storage-access).

> **Note:**
>
-> You must specify the database when you export data to local storage.
+> Currently, you can only export data to Azure Blob Storage using the [TiDB Cloud CLI](/tidb-cloud/cli-reference.md).
+
+## Export options
+
+### Data filtering
+
+- The TiDB Cloud console supports exporting data from selected databases and tables.
+- The TiDB Cloud CLI supports exporting data using SQL statements and [table filters](/table-filter.md).

### Data formats

You can export data in the following formats:

-- `SQL` (default): export data in SQL format.
-- `CSV`: export data in CSV format.
+- `SQL`: export data in SQL format.
+- `CSV`: export data in CSV format. You can specify the following options:
+    - `delimiter`: specify the character used to quote fields in the exported data. The default delimiter is `"`.
+    - `separator`: specify the character used to separate fields in the exported data. The default separator is `,`.
+    - `header`: specify whether to include a header row in the exported data. The default value is `true`.
+ - `null-value`: specify the string that represents a NULL value in the exported data. The default value is `\N`. +- `Parquet`: export data in Parquet format. Currently it is only supported in TiDB Cloud CLI. The schema and data are exported according to the following naming conventions: -| Item | Not compressed | Compressed | -|-----------------|------------------------------------------|-----------------------------------------------------| -| Database schema | {database}-schema-create.sql | {database}-schema-create.sql.{compression-type} | -| Table schema | {database}.{table}-schema.sql | {database}.{table}-schema.sql.{compression-type} | -| Data | {database}.{table}.{0001}.{sql|csv} | {database}.{table}.{0001}.{sql|csv}.{compression-type} | +| Item | Not compressed | Compressed | +|-----------------|-------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------| +| Database schema | {database}-schema-create.sql | {database}-schema-create.sql.{compression-type} | +| Table schema | {database}.{table}-schema.sql | {database}.{table}-schema.sql.{compression-type} | +| Data | {database}.{table}.{0001}.{csv|parquet|sql} | {database}.{table}.{0001}.{csv|sql}.{compression-type}
{database}.{table}.{0001}.{compression-type}.parquet | ### Data compression -You can compress the exported data using the following algorithms: +You can compress the exported CSV and SQL data using the following algorithms: + +- `gzip` (default): compress the exported data with `gzip`. +- `snappy`: compress the exported data with `snappy`. +- `zstd`: compress the exported data with `zstd`. +- `none`: do not compress the exported `data`. + +You can compress the exported Parquet data using the following algorithms: + +- `zstd` (default): compress the Parquet file with `zstd`. +- `gzip`: compress the Parquet file with `gzip`. +- `snappy`: compress the Parquet file with `snappy`. +- `none`: do not compress the Parquet file. + +## Steps + +### Export data to a local file + + +
+ +1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + +2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. -- `gzip` (default): compress the exported data with gzip. -- `snappy`: compress the exported data with snappy. -- `zstd`: compress the exported data with zstd. -- `none`: do not compress the exported data. +3. On the **Import** page, click **Export Data to** in the upper-right corner, then choose **Local File** from the drop-down list. Fill in the following parameters: -### Cancel export + - **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`. + - **Exported Data**: choose the databases and tables you want to export. + - **Data Format**: choose **SQL File** or **CSV**. + - **Compression**: choose **Gzip**, **Snappy**, **Zstd**, or **None**. -You can cancel an export task that is in the running state. + > **Tip:** + > + > If your cluster has neither imported nor exported any data before, you need to click **Click here to export data to...** at the bottom of the page to export data. + +4. Click **Export**. -## Examples +5. After the export task is successful, you can copy the download command displayed in the export task detail, and then download the exported data by running the command in the [TiDB Cloud CLI](/tidb-cloud/cli-reference.md). -Currently, you can manage export tasks using [TiDB Cloud CLI](/tidb-cloud/cli-reference.md). +
-### Export data to local storage +
-1. Create an export task that specifies the database and table you want to export: +1. Create an export task. - ```shell - ticloud serverless export create -c --database --table - ``` + ```shell + ticloud serverless export create -c --filter "database.table" + ``` You will get an export ID from the output. -2. After the export is successful, download the exported data to your local storage: +2. After the export task is successful, download the exported data to your local file: - ```shell - ticloud serverless export download -c -e - ``` + ```shell + ticloud serverless export download -c -e + ``` + + For more information about the download command, see [ticloud serverless export download](/tidb-cloud/ticloud-serverless-export-download.md). + + + ### Export data to Amazon S3 + +
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project.
+
+    > **Tip:**
+    >
+    > If you have multiple projects, you can click in the lower-left corner and switch to another project.
+
+2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane.
+
+3. On the **Import** page, click **Export Data to** in the upper-right corner, then choose **Amazon S3** from the drop-down list. Fill in the following parameters:
+
+    - **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
+    - **Exported Data**: choose the databases and tables you want to export.
+    - **Data Format**: choose **SQL File** or **CSV**.
+    - **Compression**: choose **Gzip**, **Snappy**, **Zstd**, or **None**.
+    - **Folder URI**: enter the URI of the Amazon S3 folder in the `s3://<bucket-name>/<folder-path>` format.
+    - **Bucket Access**: choose one of the following access credentials and then fill in the credential information. If you do not have such information, see [Configure External Storage Access for TiDB Serverless](/tidb-cloud/serverless-external-storage.md#configure-amazon-s3-access).
+        - **AWS Role ARN**: enter the role ARN that has the `s3:PutObject` and `s3:ListBucket` permissions to access the bucket.
+        - **AWS Access Key**: enter the access key ID and access key secret that have the `s3:PutObject` and `s3:ListBucket` permissions to access the bucket.
+
+4. Click **Export**.
+
+ +
+
```shell
-ticloud serverless export create -c <cluster-id> --bucket-uri <bucket-uri> --access-key-id <access-key-id> --secret-access-key <secret-access-key>
+ticloud serverless export create -c <cluster-id> --s3.uri <uri> --s3.access-key-id <access-key-id> --s3.secret-access-key <secret-access-key> --filter "database.table"
```

-### Export with the CSV format
+- `s3.uri`: the Amazon S3 URI in the `s3://<bucket-name>/<folder-path>` format.
+- `s3.access-key-id`: the access key ID of the user who has permission to access the bucket.
+- `s3.secret-access-key`: the access key secret of the user who has permission to access the bucket.

```shell
-ticloud serverless export create -c <cluster-id> --file-type CSV
+ticloud serverless export create -c <cluster-id> --s3.uri <uri> --s3.role-arn <role-arn> --filter "database.table"
```

-### Export the whole database
+- `s3.uri`: the Amazon S3 URI in the `s3://<bucket-name>/<folder-path>` format.
+- `s3.role-arn`: the ARN of the role that has permission to access the bucket.
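The `s3:PutObject` and `s3:ListBucket` permissions that the export service requires can be granted with a minimal IAM policy along the following lines. This is a sketch: the bucket name `your-bucket` and the folder path `export-folder` are placeholders, not values from this document.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowExportWrite",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::your-bucket/export-folder/*"
        },
        {
            "Sid": "AllowExportList",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::your-bucket"
        }
    ]
}
```

Note that `s3:PutObject` applies to object paths (hence the `/*` suffix), while `s3:ListBucket` applies to the bucket ARN itself.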
+
+
+### Export data to Google Cloud Storage
+
+Currently, you can only export data to Google Cloud Storage using the [TiDB Cloud CLI](/tidb-cloud/cli-reference.md).

```shell
-ticloud serverless export create -c <cluster-id> --database <database>
+ticloud serverless export create -c <cluster-id> --gcs.uri <uri> --gcs.service-account-key <service-account-key> --filter "database.table"
```

-### Export with snappy compression
+- `gcs.uri`: the URI of the Google Cloud Storage bucket in the `gs://<bucket-name>/<folder-path>` format.
+- `gcs.service-account-key`: the base64 encoded service account key.
+
+### Export data to Azure Blob Storage
+
+Currently, you can only export data to Azure Blob Storage using the [TiDB Cloud CLI](/tidb-cloud/cli-reference.md).

```shell
-ticloud serverless export create -c <cluster-id> --compress snappy
+ticloud serverless export create -c <cluster-id> --azblob.uri <uri> --azblob.sas-token <sas-token> --filter "database.table"
```

+- `azblob.uri`: the URI of the Azure Blob Storage in the `azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>` format.
+- `azblob.sas-token`: the account SAS token of the Azure Blob Storage account.

### Cancel an export task

+To cancel an ongoing export task, take the following steps:
+
+ +1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + > **Tip:** + > + > If you have multiple projects, you can click in the lower-left corner and switch to another project. + +2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +3. On the **Import** page, click **Export** to view the export task list. + +4. Choose the export task you want to cancel, and then click **Action**. + +5. Choose **Cancel** in the drop-down list. Note that you can only cancel the export task that is in the **Running** status. + +
+ +
+ ```shell ticloud serverless export cancel -c -e ``` +
+
+ ## Pricing -The export service is free during the beta period. You only need to pay for the [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit) generated during the export process of successful or canceled tasks. For failed export tasks, you will not be charged. \ No newline at end of file +The export service is free during the beta period. You only need to pay for the [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit) generated during the export process of successful or canceled tasks. For failed export tasks, you will not be charged. diff --git a/tidb-cloud/serverless-external-storage.md b/tidb-cloud/serverless-external-storage.md new file mode 100644 index 0000000000000..a8c269fc23d96 --- /dev/null +++ b/tidb-cloud/serverless-external-storage.md @@ -0,0 +1,226 @@ +--- +title: Configure TiDB Serverless External Storage Access +summary: Learn how to configure Amazon Simple Storage Service (Amazon S3) access, Google Cloud Storage (GCS) access and Azure Blob Storage access. +--- + +# Configure External Storage Access for TiDB Serverless + +If you want to import data from or export data to an external storage in a TiDB Serverless cluster, you need to configure cross-account access. This document describes how to configure access to an external storage, including Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS) and Azure Blob Storage for TiDB Serverless clusters. + +If you need to configure these external storages for a TiDB Dedicated cluster, see [Configure External Storage for TiDB Dedicated](/tidb-cloud/config-s3-and-gcs-access.md). + +## Configure Amazon S3 access + +To allow a TiDB Serverless cluster to access your Amazon S3 bucket, you need to configure the bucket access for the cluster. You can use either of the following methods to configure the bucket access: + +- Use a Role ARN: use a Role ARN to access your Amazon S3 bucket. 
+- Use an AWS access key: use the access key of an IAM user to access your Amazon S3 bucket. + + +
+ +It is recommended that you use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create a role ARN. Take the following steps to create one: + +1. Open the **Import** page for your target cluster. + + 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/console/clusters) page of your project. + + 2. Click the name of your target cluster to go to its overview page, and then click **Import** in the left navigation pane. + +2. Open the **Add New ARN** dialog. + + - If you want to import data from Amazon S3, open the **Add New ARN** dialog as follows: + + 1. Click **Import from S3**. + 2. Fill in the **File URI** field. + 3. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**. + + - If you want to export data to Amazon S3, open the **Add New ARN** dialog as follows: + + 1. Click **Export data to...** > **Amazon S3**. If your cluster has neither imported nor exported any data before, click **Click here to export data to...** > **Amazon S3** at the bottom of the page. + 2. Fill in the **Folder URI** field. + 3. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**. + +3. Create a role ARN with AWS CloudFormation template. + + 1. In the **Add New ARN** dialog, click **AWS Console with CloudFormation Template**. + + 2. Log in to the [AWS Management Console](https://console.aws.amazon.com/) and you will be redirected to the AWS CloudFormation **Quick create stack** page. + + 3. Fill in the **Role Name**. + + 4. Acknowledge to create a new role and click **Create stack** to create the role ARN. + + 5. After the CloudFormation stack is executed, you can click the **Outputs** tab and find the Role ARN value in the **Value** column. 
+
+        ![Serverless role ARN](/media/tidb-cloud/serverless-external-storage/serverless-role-arn.png)
+
+If you have any trouble creating a role ARN with AWS CloudFormation, you can take the following steps to create one manually:
+
+Click here to see details
+
+1. In the **Add New ARN** dialog described in the previous instructions, click **Having trouble? Create Role ARN manually**. You will get the **TiDB Cloud Account ID** and **TiDB Cloud External ID**.
+
+2. In the AWS Management Console, create a managed policy for your Amazon S3 bucket.
+
+    1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and open the [Amazon S3 console](https://console.aws.amazon.com/s3/).
+
+    2. In the **Buckets** list, choose the name of your bucket with the source data, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
+
+        ![Copy bucket ARN](/media/tidb-cloud/copy-bucket-arn.png)
+
+    3. Open the [IAM console](https://console.aws.amazon.com/iam/), click **Policies** in the left navigation pane, and then click **Create Policy**.
+
+        ![Create a policy](/media/tidb-cloud/aws-create-policy.png)
+
+    4. On the **Create policy** page, click the **JSON** tab.
+
+    5. Configure the policy in the policy text field according to your needs. The following is an example that you can use to export data from and import data to a TiDB Serverless cluster.
+
+        ```json
+        {
+            "Version": "2012-10-17",
+            "Statement": [
+                {
+                    "Sid": "VisualEditor0",
+                    "Effect": "Allow",
+                    "Action": [
+                        "s3:GetObject",
+                        "s3:GetObjectVersion",
+                        "s3:PutObject"
+                    ],
+                    "Resource": "<your-bucket-arn>/<folder-path>/*"
+                },
+                {
+                    "Sid": "VisualEditor1",
+                    "Effect": "Allow",
+                    "Action": [
+                        "s3:ListBucket"
+                    ],
+                    "Resource": "<your-bucket-arn>"
+                }
+            ]
+        }
+        ```
+
+        In the policy text field, replace the following configurations with your own values.
+
+        - `"Resource": "<your-bucket-arn>/<folder-path>/*"`. For example:
+
+            - If your source data is stored in the root directory of the `tidb-cloud-source-data` bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/*"`.
+            - If your source data is stored in the `mydata` directory of the bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/mydata/*"`.
+
+            Make sure that `/*` is added to the end of the directory so TiDB Cloud can access all files in this directory.
+
+        - `"Resource": "<your-bucket-arn>"`, for example, `"Resource": "arn:aws:s3:::tidb-cloud-source-data"`.
+
+        - If you have enabled AWS Key Management Service key (SSE-KMS) with customer-managed key encryption, make sure the following configuration is included in the policy. `"arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f"` is a sample KMS key of the bucket.
+
+            ```
+            {
+                "Sid": "AllowKMSkey",
+                "Effect": "Allow",
+                "Action": [
+                    "kms:Decrypt"
+                ],
+                "Resource": "arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f"
+            }
+            ```
+
+        - If the objects in your bucket have been copied from another encrypted bucket, the KMS key value needs to include the keys of both buckets. For example, `"Resource": ["arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f","arn:aws:kms:ap-northeast-1:495580073302:key/0d7926a7-6ecc-4bf7-a9c1-a38f0faec0cd"]`.
+
+    6. Click **Next**.
+
+    7. Set a policy name, add a tag for the policy (optional), and then click **Create policy**.
+
+3. In the AWS Management Console, create an access role for TiDB Cloud and get the role ARN.
+
+    1. In the [IAM console](https://console.aws.amazon.com/iam/), click **Roles** in the left navigation pane, and then click **Create role**.
+
+        ![Create a role](/media/tidb-cloud/aws-create-role.png)
+
+    2. To create a role, fill in the following information:
+
+        - In **Trusted entity type**, select **AWS account**.
+        - In **An AWS account**, select **Another AWS account**, and then paste the TiDB Cloud account ID to the **Account ID** field.
+        - In **Options**, click **Require external ID (Best practice when a third party will assume this role)**, and then paste the TiDB Cloud External ID to the **External ID** field.
If the role is created without the external ID requirement, once the configuration is done for one TiDB cluster in a project, all TiDB clusters in that project can use the same Role ARN to access your Amazon S3 bucket. If the role is created with both the account ID and the external ID, only the corresponding TiDB cluster can access the bucket.
+
+    3. Click **Next** to open the policy list, choose the policy you just created, and then click **Next**.
+
+    4. In **Role details**, set a name for the role, and then click **Create role** in the lower-right corner. After the role is created, the list of roles is displayed.
+
+    5. In the list of roles, click the name of the role that you just created to go to its summary page, and then you can get the role ARN.
+
+        ![Copy AWS role ARN](/media/tidb-cloud/aws-role-arn.png)
+
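The account ID and external ID entered in the steps above end up in the role's trust policy, which looks roughly like the following. This is a sketch for orientation only; the `<tidb-cloud-account-id>` and `<tidb-cloud-external-id>` placeholders stand for the values displayed in the **Add New ARN** dialog.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<tidb-cloud-account-id>:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "<tidb-cloud-external-id>"
                }
            }
        }
    ]
}
```

The `sts:ExternalId` condition is what restricts the role to the single TiDB cluster that knows the external ID.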
+ +
+ +
+ +It is recommended that you use an IAM user (instead of the AWS account root user) to create an access key. + +Take the following steps to configure an access key: + +1. Create an IAM user. For more information, see [creating an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console). + +2. Use your AWS account ID or account alias, and your IAM user name and password to sign in to [the IAM console](https://console.aws.amazon.com/iam). + +3. Create an access key. For more information, see [creating an access key for an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey). + +> **Note:** +> +> TiDB Cloud does not store your access keys. It is recommended that you [delete the access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) after the import is complete. + +
+
+
+## Configure GCS access
+
+To allow a TiDB Serverless cluster to access your GCS bucket, you need to configure the GCS access for the bucket. You can use a service account key to configure the bucket access.
+
+Take the following steps to configure a service account key:
+
+1. Click **CREATE SERVICE ACCOUNT** to create a service account on the Google Cloud [service account page](https://console.cloud.google.com/iam-admin/serviceaccounts). For more information, see [Creating a service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts).
+
+    1. Enter a service account name.
+    2. Enter a description of the service account (optional).
+    3. Click **CREATE AND CONTINUE** to create the service account.
+    4. In **Grant this service account access to project**, choose the [IAM roles](https://cloud.google.com/iam/docs/understanding-roles) with the needed permissions. For example, exporting data from a TiDB Serverless cluster needs a role with the `storage.objects.create` permission.
+    5. Click **Continue** to go to the next step.
+    6. Optional: in **Grant users access to this service account**, choose members that need to [attach the service account to other resources](https://cloud.google.com/iam/docs/attach-service-accounts).
+    7. Click **Done** to finish creating the service account.
+
+    ![service-account](/media/tidb-cloud/serverless-external-storage/gcs-service-account.png)
+
+2. Click the service account, and then click **ADD KEY** on the **KEYS** page to create a service account key.
+
+    ![service-account-key](/media/tidb-cloud/serverless-external-storage/gcs-service-account-key.png)
+
+3. Choose the default `JSON` key type, and then click **CREATE** to download the service account key.
+
+## Configure Azure Blob Storage access
+
+To allow TiDB Serverless to access your Azure Blob Storage container, you need to configure the Azure Blob Storage access for the container.
You can use an account SAS token to configure the container access.
+
+Take the following steps to configure an account SAS token:
+
+1. On the [Azure Storage accounts](https://portal.azure.com/#browse/Microsoft.Storage%2FStorageAccounts) page, click the storage account to which the container belongs.
+
+2. On your **Storage account** page, click **Security + networking**, and then click **Shared access signature**.
+
+    ![sas-position](/media/tidb-cloud/serverless-external-storage/azure-sas-position.png)
+
+3. On the **Shared access signature** page, create an account SAS token with the needed permissions as follows. For more information, see [Grant limited access to Azure Storage resources using shared access signatures](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).
+
+    1. In the **Allowed services** section, choose the **Blob** service.
+    2. In the **Allowed resource types** section, choose **Container** and **Object**.
+    3. In the **Allowed permissions** section, choose the permissions as needed. For example, exporting data from a TiDB Serverless cluster needs the **Read** and **Write** permissions.
+    4. Adjust the **Start and expiry date/time** as needed.
+    5. You can keep the default values for other settings.
+
+    ![sas-create](/media/tidb-cloud/serverless-external-storage/azure-sas-create.png)
+
+4. Click **Generate SAS and connection string** to generate the SAS token. You will provide this token when you export data to Azure Blob Storage.
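As a quick sanity check before using the token, you can inspect the `sp` (signed permissions) parameter of the generated SAS query string, which lists the granted permissions as single letters (`r` for Read, `w` for Write, and so on). A sketch, assuming a token in the usual query-string form; the token below is a fabricated example, not a real credential:

```shell
# Fabricated account SAS token for illustration; a real one comes from the
# "Generate SAS and connection string" step above.
SAS_TOKEN='sv=2022-11-02&ss=b&srt=co&sp=rwl&se=2024-12-31T00:00:00Z&sig=placeholder'

# Split the query string on '&' and extract the sp value.
sp=$(printf %s "$SAS_TOKEN" | tr '&' '\n' | sed -n 's/^sp=//p')

# Confirm the token grants both Read (r) and Write (w).
case "$sp" in
    *r*w*|*w*r*) echo "SAS token grants Read and Write" ;;
    *) echo "SAS token is missing Read or Write" ;;
esac
```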