Import Apache Parquet Files from Amazon S3, GCS, or Azure Blob Storage into TiDB Cloud Serverless
You can import both uncompressed and Snappy compressed Apache Parquet format data files to TiDB Cloud Serverless. This document describes how to import Parquet files from Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), or Azure Blob Storage into TiDB Cloud Serverless.
Step 1. Prepare the Parquet files
If a Parquet file is larger than 256 MB, consider splitting it into smaller files, each around 256 MB in size.

TiDB Cloud Serverless supports importing very large Parquet files, but it performs best with multiple input files of around 256 MB each: it can process multiple files in parallel, which greatly improves the import speed.
Name the Parquet files as follows:

- If a Parquet file contains all data of an entire table, name the file in the `${db_name}.${table_name}.parquet` format, which maps to the `${db_name}.${table_name}` table when you import the data.
- If the data of one table is separated into multiple Parquet files, append a numeric suffix to these Parquet files. For example, `${db_name}.${table_name}.000001.parquet` and `${db_name}.${table_name}.000002.parquet`. The numeric suffixes can be inconsecutive but must be in ascending order. You also need to pad the numbers with leading zeros so that all suffixes have the same length. For a scripted way to split and name the files, see the sketch below.
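The following is a minimal sketch (not official TiDB tooling) of how you might split a large Parquet file into parts that follow the naming convention above, assuming pyarrow is installed. The 256 MB target is estimated from in-memory batch sizes, so the on-disk parts are approximate, and the file and table names are hypothetical placeholders.

```python
# A sketch, assuming pyarrow: split a large Parquet file into parts named
# ${db_name}.${table_name}.NNNNNN.parquet, each targeting roughly 256 MB.
# Sizes are estimated from in-memory batches, so parts are approximate.
import pyarrow as pa
import pyarrow.parquet as pq

TARGET_BYTES = 256 * 1024 * 1024  # ~256 MB per output file

def split_parquet(src_path: str, db_name: str, table_name: str) -> None:
    src = pq.ParquetFile(src_path)
    part, writer, written = 1, None, 0
    for batch in src.iter_batches(batch_size=64_000):
        if writer is None:
            out = f"{db_name}.{table_name}.{part:06d}.parquet"
            writer = pq.ParquetWriter(out, src.schema_arrow)
        writer.write_table(pa.Table.from_batches([batch]))
        written += batch.nbytes  # in-memory estimate, not on-disk size
        if written >= TARGET_BYTES:
            writer.close()
            part, writer, written = part + 1, None, 0
    if writer is not None:
        writer.close()

# Hypothetical input file and names.
split_parquet("mydb.mytable.parquet", "mydb", "mytable")
```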
Step 2. Create the target table schemas
Because Parquet files do not contain schema information, before importing data from Parquet files into TiDB Cloud Serverless, you need to create the table schemas using either of the following methods:
Method 1: In TiDB Cloud Serverless, create the target databases and tables for your source data.
Method 2: In the Amazon S3, GCS, or Azure Blob Storage directory where the Parquet files are located, create the target table schema files for your source data as follows:
Create database schema files for your source data.

If your Parquet files follow the naming rules in Step 1, the database schema files are optional for the data import. Otherwise, they are mandatory.

Each database schema file must be in the `${db_name}-schema-create.sql` format and contain a `CREATE DATABASE` DDL statement. With this file, TiDB Cloud Serverless will create the `${db_name}` database to store your data when you import the data.

For example, if you create a `mydb-schema-create.sql` file that contains the following statement, TiDB Cloud Serverless will create the `mydb` database when you import the data.

```sql
CREATE DATABASE mydb;
```

Create table schema files for your source data.
If you do not include the table schema files in the Amazon S3, GCS, or Azure Blob Storage directory where the Parquet files are located, TiDB Cloud Serverless will not create the corresponding tables for you when you import the data.
Each table schema file must be in the `${db_name}.${table_name}-schema.sql` format and contain a `CREATE TABLE` DDL statement. With this file, TiDB Cloud Serverless will create the `${table_name}` table in the `${db_name}` database when you import the data.

For example, if you create a `mydb.mytable-schema.sql` file that contains the following statement, TiDB Cloud Serverless will create the `mytable` table in the `mydb` database when you import the data.

```sql
CREATE TABLE mytable (
  ID INT,
  REGION VARCHAR(20),
  COUNT INT
);
```
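Because the table schema must match the columns in your Parquet data, you can derive a starting-point DDL from a file itself. The following is a minimal, hypothetical sketch using pyarrow (not a TiDB Cloud feature); the type map covers only a few common cases, so review the output against the supported data types table later in this document.

```python
# A sketch, assuming pyarrow: derive a starting-point CREATE TABLE statement
# from a Parquet file's schema. The type map below is an assumption, not an
# official mapping; unknown types fall back to TEXT and need manual review.
import pyarrow as pa
import pyarrow.parquet as pq

TYPE_MAP = {
    pa.int32(): "INT",
    pa.int64(): "BIGINT",
    pa.float32(): "FLOAT",
    pa.float64(): "DOUBLE",
    pa.string(): "VARCHAR(255)",
    pa.bool_(): "TINYINT(1)",
}

def parquet_to_create_table(path: str, table_name: str) -> str:
    schema = pq.read_schema(path)
    cols = [
        f"  {field.name} {TYPE_MAP.get(field.type, 'TEXT')}"
        for field in schema
    ]
    return f"CREATE TABLE {table_name} (\n" + ",\n".join(cols) + "\n);"

# Hypothetical file name; prints DDL you can save as mydb.mytable-schema.sql.
print(parquet_to_create_table("mydb.mytable.parquet", "mytable"))
```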
Step 3. Configure cross-account access
To allow TiDB Cloud Serverless to access the Parquet files in the Amazon S3, GCS, or Azure Blob Storage bucket, do one of the following:
- If your Parquet files are located in Amazon S3, configure external storage access for TiDB Cloud Serverless. You can use either an AWS access key or a Role ARN to access your bucket. Once finished, make a note of the access key (including the access key ID and secret access key) or the Role ARN value, as you will need it in Step 4.
- If your Parquet files are located in GCS, configure external storage access for TiDB Cloud Serverless.
- If your Parquet files are located in Azure Blob Storage, configure external storage access for TiDB Cloud Serverless.
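Optionally, you can sanity-check the Amazon S3 credentials from your own machine before continuing. The following is a sketch assuming boto3 and the access-key option, with a hypothetical bucket and folder; it only confirms that the key pair can list the source folder, not that TiDB Cloud Serverless itself can reach it.

```python
# A sketch, assuming boto3 and the access-key option: confirm the key pair
# can list the source folder. Bucket name and prefix are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)
resp = s3.list_objects_v2(Bucket="sampledata", Prefix="ingest/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```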
Step 4. Import Parquet files to TiDB Cloud Serverless
To import the Parquet files to TiDB Cloud Serverless, take the following steps:
- Amazon S3
- Google Cloud
- Azure Blob Storage
Open the Import page for your target cluster.
Log in to the TiDB Cloud console and navigate to the Clusters page of your project.
Click the name of your target cluster to go to its overview page, and then click Import in the left navigation pane.
Select Import data from Cloud Storage, and then click Amazon S3.
On the Import Data from Amazon S3 page, provide the following information for the source Parquet files:
- Import File Count: select One file or Multiple files as needed.
- Included Schema Files: this field is only visible when importing multiple files. If the source folder contains the target table schemas, select Yes. Otherwise, select No.
- Data Format: select Parquet.
- File URI or Folder URI:
  - When importing one file, enter the source file URI and name in the following format: `s3://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `s3://sampledata/ingest/TableName.01.parquet`.
  - When importing multiple files, enter the source folder URI in the following format: `s3://[bucket_name]/[data_source_folder]/`. For example, `s3://sampledata/ingest/`.
- Bucket Access: you can use either an AWS Role ARN or an AWS access key to access your bucket. For more information, see Configure Amazon S3 access.
- AWS Role ARN: enter the AWS Role ARN value.
- AWS Access Key: enter the AWS access key ID and AWS secret access key.
Click Connect.
In the Destination section, select the target database and table.
When importing multiple files, you can use Advanced Settings > Mapping Settings to define a custom mapping rule for each target table and its corresponding Parquet file. After that, the data source files will be re-scanned using the provided custom mapping rule.
When you enter the source file URI and name in Source File URIs and Names, make sure it is in the following format: `s3://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `s3://sampledata/ingest/TableName.01.parquet`.

You can also use wildcards to match the source files. For example:

- `s3://[bucket_name]/[data_source_folder]/my-data?.parquet`: all Parquet files starting with `my-data` followed by one character (such as `my-data1.parquet` and `my-data2.parquet`) in that folder will be imported into the same target table.
- `s3://[bucket_name]/[data_source_folder]/my-data*.parquet`: all Parquet files in the folder starting with `my-data` will be imported into the same target table.

Note that only `?` and `*` are supported.

Click Start Import.
When the import progress shows Completed, check the imported tables.
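If you are unsure what a `?` or `*` pattern will match before starting the import, Python's fnmatch module interprets these two wildcards the same way, so you can preview a pattern locally. This is a small sketch with made-up file names; avoid fnmatch's additional `[...]` syntax, which this import feature does not support.

```python
# Preview ?/* wildcard matches locally. fnmatch's ? and * behave like the
# import feature's wildcards; its [...] syntax has no equivalent here.
import fnmatch

names = ["my-data1.parquet", "my-data2.parquet",
         "my-data10.parquet", "other.parquet"]

print(fnmatch.filter(names, "my-data?.parquet"))
# ['my-data1.parquet', 'my-data2.parquet']
print(fnmatch.filter(names, "my-data*.parquet"))
# ['my-data1.parquet', 'my-data2.parquet', 'my-data10.parquet']
```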
Open the Import page for your target cluster.
Log in to the TiDB Cloud console and navigate to the Clusters page of your project.
Click the name of your target cluster to go to its overview page, and then click Import in the left navigation pane.
Select Import data from Cloud Storage, and then click Google Cloud Storage.
On the Import Data from Google Cloud Storage page, provide the following information for the source Parquet files:
- Import File Count: select One file or Multiple files as needed.
- Included Schema Files: this field is only visible when importing multiple files. If the source folder contains the target table schemas, select Yes. Otherwise, select No.
- Data Format: select Parquet.
- File URI or Folder URI:
  - When importing one file, enter the source file URI and name in the following format: `[gcs|gs]://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `[gcs|gs]://sampledata/ingest/TableName.01.parquet`.
  - When importing multiple files, enter the source folder URI in the following format: `[gcs|gs]://[bucket_name]/[data_source_folder]/`. For example, `[gcs|gs]://sampledata/ingest/`.
- Bucket Access: you can use a GCS IAM Role to access your bucket. For more information, see Configure GCS access.
Click Connect.
In the Destination section, select the target database and table.
When importing multiple files, you can use Advanced Settings > Mapping Settings to define a custom mapping rule for each target table and its corresponding Parquet file. After that, the data source files will be re-scanned using the provided custom mapping rule.
When you enter the source file URI and name in Source File URIs and Names, make sure it is in the following format: `[gcs|gs]://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `[gcs|gs]://sampledata/ingest/TableName.01.parquet`.

You can also use wildcards to match the source files. For example:

- `[gcs|gs]://[bucket_name]/[data_source_folder]/my-data?.parquet`: all Parquet files starting with `my-data` followed by one character (such as `my-data1.parquet` and `my-data2.parquet`) in that folder will be imported into the same target table.
- `[gcs|gs]://[bucket_name]/[data_source_folder]/my-data*.parquet`: all Parquet files in the folder starting with `my-data` will be imported into the same target table.

Note that only `?` and `*` are supported.

Click Start Import.
When the import progress shows Completed, check the imported tables.
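As with Amazon S3 in Step 3, you can optionally verify access and preview the source folder from your own machine. The following is a sketch using the google-cloud-storage client library with hypothetical bucket and folder names; it assumes your local credentials (for example, GOOGLE_APPLICATION_CREDENTIALS) are configured and only confirms that the folder is listable, not that TiDB Cloud Serverless can reach it.

```python
# A sketch, assuming the google-cloud-storage package and local credentials:
# list the source folder to confirm the Parquet files are where you expect.
# Bucket name and prefix are hypothetical placeholders.
from google.cloud import storage

client = storage.Client()
for blob in client.list_blobs("sampledata", prefix="ingest/"):
    print(blob.name, blob.size)
```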
Open the Import page for your target cluster.
Log in to the TiDB Cloud console and navigate to the Clusters page of your project.
Click the name of your target cluster to go to its overview page, and then click Import in the left navigation pane.
Select Import data from Cloud Storage, and then click Azure Blob Storage.
On the Import Data from Azure Blob Storage page, provide the following information for the source Parquet files:
- Import File Count: select One file or Multiple files as needed.
- Included Schema Files: this field is only visible when importing multiple files. If the source folder contains the target table schemas, select Yes. Otherwise, select No.
- Data Format: select Parquet.
- File URI or Folder URI:
  - When importing one file, enter the source file URI and name in the following format: `[azure|https]://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `[azure|https]://sampledata/ingest/TableName.01.parquet`.
  - When importing multiple files, enter the source folder URI in the following format: `[azure|https]://[bucket_name]/[data_source_folder]/`. For example, `[azure|https]://sampledata/ingest/`.
- Bucket Access: you can use a shared access signature (SAS) token to access your bucket. For more information, see Configure Azure Blob Storage access.
Click Connect.
In the Destination section, select the target database and table.
When importing multiple files, you can use Advanced Settings > Mapping Settings to define a custom mapping rule for each target table and its corresponding Parquet file. After that, the data source files will be re-scanned using the provided custom mapping rule.
When you enter the source file URI and name in Source File URIs and Names, make sure it is in the following format: `[azure|https]://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `[azure|https]://sampledata/ingest/TableName.01.parquet`.

You can also use wildcards to match the source files. For example:

- `[azure|https]://[bucket_name]/[data_source_folder]/my-data?.parquet`: all Parquet files starting with `my-data` followed by one character (such as `my-data1.parquet` and `my-data2.parquet`) in that folder will be imported into the same target table.
- `[azure|https]://[bucket_name]/[data_source_folder]/my-data*.parquet`: all Parquet files in the folder starting with `my-data` will be imported into the same target table.

Note that only `?` and `*` are supported.

Click Start Import.
When the import progress shows Completed, check the imported tables.
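Here too, you can optionally confirm that the SAS token grants access to the source folder from your own machine. The following is a sketch using the azure-storage-blob client library; the account, container, and folder names are hypothetical placeholders, and it only checks that the folder is listable with the token, not that TiDB Cloud Serverless can reach it.

```python
# A sketch, assuming the azure-storage-blob package: use the container URL
# with the SAS token appended to confirm the source folder is listable.
# Account, container, and folder names are hypothetical placeholders.
from azure.storage.blob import ContainerClient

container = ContainerClient.from_container_url(
    "https://myaccount.blob.core.windows.net/sampledata?<sas_token>"
)
for blob in container.list_blobs(name_starts_with="ingest/"):
    print(blob.name, blob.size)
```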
When you run an import task, if any unsupported or invalid conversions are detected, TiDB Cloud Serverless terminates the import job automatically and reports an importing error.
If you get an importing error, do the following:
1. Drop the partially imported table.
2. Check the table schema file. If there are any errors, correct the table schema file.
3. Check the data types in the Parquet files. If the Parquet files contain any unsupported data types (for example, nested `STRUCT`, `ARRAY`, or `MAP`), regenerate the Parquet files using supported data types (for example, `STRING`).
4. Try the import task again.
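To find offending columns before re-running the import, you can inspect the Parquet schema locally. A minimal sketch, assuming pyarrow; it flags the nested Arrow types that correspond to the unsupported Parquet types above, and the file name is a hypothetical placeholder.

```python
# A sketch, assuming pyarrow: flag columns whose Arrow types are nested
# (struct, list, or map), which correspond to the unsupported Parquet
# types mentioned above.
import pyarrow as pa
import pyarrow.parquet as pq

schema = pq.read_schema("mydb.mytable.parquet")
for field in schema:
    if pa.types.is_nested(field.type):  # covers struct, list, and map
        print(f"unsupported column: {field.name} ({field.type})")
```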
Supported data types
The following table lists the supported Parquet data types that can be imported to TiDB Cloud Serverless.
| Parquet Primitive Type | Parquet Logical Type | Types in TiDB or MySQL |
|---|---|---|
| DOUBLE | DOUBLE | DOUBLE, FLOAT |
| FIXED_LEN_BYTE_ARRAY(9) | DECIMAL(20,0) | BIGINT UNSIGNED |
| FIXED_LEN_BYTE_ARRAY(N) | DECIMAL(p,s) | DECIMAL, NUMERIC |
| INT32 | DECIMAL(p,s) | DECIMAL, NUMERIC |
| INT32 | N/A | INT, MEDIUMINT, YEAR |
| INT64 | DECIMAL(p,s) | DECIMAL, NUMERIC |
| INT64 | N/A | BIGINT, INT UNSIGNED, MEDIUMINT UNSIGNED |
| INT64 | TIMESTAMP_MICROS | DATETIME, TIMESTAMP |
| BYTE_ARRAY | N/A | BINARY, BIT, BLOB, CHAR, LINESTRING, LONGBLOB, MEDIUMBLOB, MULTILINESTRING, TINYBLOB, VARBINARY |
| BYTE_ARRAY | STRING | ENUM, DATE, DECIMAL, GEOMETRY, GEOMETRYCOLLECTION, JSON, LONGTEXT, MEDIUMTEXT, MULTIPOINT, MULTIPOLYGON, NUMERIC, POINT, POLYGON, SET, TEXT, TIME, TINYTEXT, VARCHAR |
| SMALLINT | N/A | INT32 |
| SMALLINT UNSIGNED | N/A | INT32 |
| TINYINT | N/A | INT32 |
| TINYINT UNSIGNED | N/A | INT32 |
Troubleshooting
Resolve warnings during data import
After clicking Start Import, if you see a warning message such as `can't find the corresponding source files`, resolve it by providing the correct source files, renaming the existing ones according to Naming Conventions for Data Import, or using Advanced Settings to make changes.

After resolving these issues, you need to import the data again.
Zero rows in the imported tables
After the import progress shows Completed, check the imported tables. If the number of rows is zero, it means no data files matched the Bucket URI that you entered. In this case, resolve this issue by providing the correct source file, renaming the existing one according to Naming Conventions for Data Import, or using Advanced Settings to make changes. After that, import those tables again.