Import CSV Files from Amazon S3 or GCS into TiDB Cloud
This document describes how to import uncompressed CSV files from Amazon Simple Storage Service (Amazon S3) or Google Cloud Storage (GCS) into TiDB Cloud.
- If your CSV source files are compressed, you must uncompress them before the import.
- To ensure data consistency, TiDB Cloud allows you to import CSV files into empty tables only. To import data into an existing table that already contains data, you can use TiDB Cloud to import the data into a temporary empty table by following this document, and then use the `INSERT SELECT` statement to copy the data to the existing target table.
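For example, assuming you imported the data into a temporary table named `mydb.mytable_tmp` (a hypothetical name) and your existing target table is `mydb.mytable`, the copy step might look like the following sketch:

```sql
-- Hypothetical example: copy rows from the temporary table
-- (populated by the import) into the existing target table.
INSERT INTO mydb.mytable
SELECT * FROM mydb.mytable_tmp;

-- Optionally drop the temporary table once the copy succeeds.
DROP TABLE mydb.mytable_tmp;
```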
Step 1. Prepare the CSV files
If a CSV file is larger than 256 MB, consider splitting it into smaller files, each around 256 MB in size.
TiDB Cloud supports importing very large CSV files but performs best with multiple input files around 256 MB in size, because it can process multiple files in parallel, which can greatly improve the import speed.
According to the naming convention of the existing objects in your bucket, identify a text pattern that matches the names of the CSV files to be imported.
For example, to import all data files in a bucket, you can use the wildcard symbol `*` or `*.csv` as a pattern. Similarly, to import the subset of data files in the partition `station=402260`, you can use `*station=402260*` as a pattern. Make a note of this pattern, as you will need to provide it to TiDB Cloud in Step 4.
Step 2. Create the target table schema
Before importing CSV files into TiDB Cloud, you need to create the target database and table. Alternatively, TiDB Cloud can create these objects for you as part of the import process if you provide the target database and table schemas as follows:

- In the Amazon S3 or GCS directory where the CSV files are located, create a `${db_name}-schema-create.sql` file that contains the `CREATE DATABASE` DDL statement. For example, you can create a `mydb-schema-create.sql` file that contains the following statement:

  ```sql
  CREATE DATABASE mydb;
  ```
- In the Amazon S3 or GCS directory where the CSV files are located, create a `${db_name}.${table_name}-schema.sql` file that contains the `CREATE TABLE` DDL statement. For example, you can create a `mydb.mytable-schema.sql` file that contains the following statement:

  ```sql
  CREATE TABLE mytable (
      ID INT,
      REGION VARCHAR(20),
      COUNT INT
  );
  ```
Note: The `${db_name}.${table_name}-schema.sql` file should contain only a single DDL statement. If the file contains multiple DDL statements, only the first one takes effect.
Step 3. Configure cross-account access
To allow TiDB Cloud to access the CSV files in the Amazon S3 or GCS bucket, do one of the following:
- If your organization is using TiDB Cloud as a service on AWS, configure cross-account access to Amazon S3. Once finished, make a note of the Role ARN value, as you will need it in Step 4.
- If your organization is using TiDB Cloud as a service on Google Cloud Platform (GCP), configure cross-account access to GCS.
Step 4. Import CSV files to TiDB Cloud
To import the CSV files to TiDB Cloud, take the following steps:
Navigate to the TiDB Clusters page and click the name of your target cluster. The overview page of your target cluster is displayed.
In the cluster information pane on the left, click Import. The Data Import Task page is displayed.
On the Data Import Task page, provide the following information:

- Data Source Type: select the type of the data source.
- Bucket URL: select the bucket URL where your CSV files are located.
- Bucket Region: select the region where the bucket is located.
- Data Format: select CSV.
- Setup Credentials (visible only for AWS S3): enter the Role ARN value for Role-ARN.
- CSV Configuration: check and update the CSV-specific configurations, including separator, delimiter, header, not-null, null, backslash-escape, and trim-last-separator. You can find the explanation of each CSV configuration right beside these fields.

  Note: For the separator, delimiter, and null configurations, you can use both alphanumeric characters and certain special characters. The supported special characters include `\t`, `\b`, `\n`, `\r`, `\f`, and `\u0001`.

- Target Database: fill in the Username and Password fields.
- DB/Tables Filter: if necessary, you can specify a table filter. Currently, TiDB Cloud supports only one table filter rule.
- Object Name Pattern: enter a pattern that matches the names of the CSV files to be imported. For example, `my-data.csv`.
- Target Table Name: enter the name of the target table. For example, `mydb.mytable`.
Click Import to start the import task.
When the import progress shows success, check the number after Total Files.
If the number is zero, no data files matched the value you entered in the Object Name Pattern field. In this case, ensure that there are no typos in the Object Name Pattern field and try again.
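Beyond checking Total Files, you can run a quick row-count sanity check from any SQL client connected to your cluster. A minimal sketch, assuming the example table from Step 2:

```sql
-- Hypothetical sanity check against the example table from Step 2:
-- the count should match the number of data rows in your CSV files.
SELECT COUNT(*) FROM mydb.mytable;
```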
While an import task is running, if any unsupported or invalid conversions are detected, TiDB Cloud terminates the import job automatically and reports an import error.
If you get an import error, do the following:
- Drop the partially imported table (see the sketch after this list).
- Check the table schema file. If there are any errors, correct the table schema file.
- Check the data types in the CSV files.
- Try the import task again.
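For the first recovery step, a minimal sketch assuming the example table from Step 2:

```sql
-- Hypothetical cleanup: remove the partially imported example table
-- before correcting the schema file and retrying the import.
DROP TABLE IF EXISTS mydb.mytable;
```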