Import Apache Parquet Files from Cloud Storage into TiDB Cloud Starter or Essential

You can import both uncompressed and Snappy-compressed Apache Parquet format data files into TiDB Cloud Starter or TiDB Cloud Essential. This document describes how to import Parquet files from Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Azure Blob Storage, or Alibaba Cloud Object Storage Service (OSS) into TiDB Cloud Starter or TiDB Cloud Essential.

Step 1. Prepare the Parquet files

  1. If a Parquet file is larger than 256 MB, consider splitting it into smaller files, each with a size around 256 MB.

    TiDB Cloud supports importing very large Parquet files but performs best with multiple input files around 256 MB in size. This is because TiDB Cloud can process multiple files in parallel, which can greatly improve the import speed.
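
    For example, the following is a minimal sketch of one way to perform such a split, assuming the pyarrow Python library; the file names are illustrative, and the row-count estimate is approximate, so verify the resulting file sizes before uploading.

      import os
      import pyarrow.parquet as pq

      SRC = "mydb.mytable.parquet"      # hypothetical large source file
      TARGET_BYTES = 256 * 1024 * 1024  # aim for ~256 MB per output file

      src = pq.ParquetFile(SRC)
      # Estimate how many rows fit in ~256 MB from the file's on-disk size.
      rows_per_file = max(1, src.metadata.num_rows * TARGET_BYTES // os.path.getsize(SRC))

      writer, part, written = None, 0, 0
      for batch in src.iter_batches(batch_size=65536):
          if writer is None:
              part += 1
              # Zero-padded suffixes keep the parts in ascending order (see the naming rules below).
              writer = pq.ParquetWriter(f"mydb.mytable.{part:06d}.parquet", src.schema_arrow)
          writer.write_batch(batch)
          written += batch.num_rows
          if written >= rows_per_file:
              writer.close()
              writer, written = None, 0
      if writer is not None:
          writer.close()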

  2. Name the Parquet files as follows:

    • If a Parquet file contains all data of an entire table, name the file in the ${db_name}.${table_name}.parquet format, which maps to the ${db_name}.${table_name} table when you import the data.
    • If the data of one table is separated into multiple Parquet files, append a numeric suffix to these Parquet files. For example, ${db_name}.${table_name}.000001.parquet and ${db_name}.${table_name}.000002.parquet. The numeric suffixes can be inconsecutive but must be in ascending order. You also need to pad the numbers with leading zeros so that all the suffixes have the same length.
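
    If you already have part files with arbitrary names, a short script can bring them into this convention. The following is a minimal sketch, assuming Python and the placeholder database and table names mydb and mytable:

      import os

      # Sort for a deterministic order, then assign zero-padded ascending suffixes.
      parts = sorted(p for p in os.listdir(".") if p.endswith(".parquet"))
      for i, old in enumerate(parts, start=1):
          os.rename(old, f"mydb.mytable.{i:06d}.parquet")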

Step 2. Create the target table schemas

Because Parquet files do not contain schema information, before importing data from Parquet files into TiDB Cloud, you need to create the table schemas using either of the following methods:

  • Method 1: In TiDB Cloud, create the target databases and tables for your source data.

  • Method 2: In the Amazon S3, GCS, Azure Blob Storage, or Alibaba Cloud Object Storage Service directory where the Parquet files are located, create the target table schema files for your source data as follows:

    1. Create database schema files for your source data.

      If your Parquet files follow the naming rules in Step 1, the database schema files are optional for the data import. Otherwise, the database schema files are mandatory.

      Each database schema file must be in the ${db_name}-schema-create.sql format and contain a CREATE DATABASE DDL statement. With this file, TiDB Cloud will create the ${db_name} database to store your data when you import the data.

      For example, if you create a mydb-schema-create.sql file that contains the following statement, TiDB Cloud will create the mydb database when you import the data.

      CREATE DATABASE mydb;
    2. Create table schema files for your source data.

      If you do not include the table schema files in the Amazon S3, GCS, Azure Blob Storage, or Alibaba Cloud Object Storage Service directory where the Parquet files are located, TiDB Cloud will not create the corresponding tables for you when you import the data.

      Each table schema file must be in the ${db_name}.${table_name}-schema.sql format and contain a CREATE TABLE DDL statement. With this file, TiDB Cloud will create the ${table_name} table in the ${db_name} database when you import the data.

      For example, if you create a mydb.mytable-schema.sql file that contains the following statement, TiDB Cloud will create the mytable table in the mydb database when you import the data.

      CREATE TABLE mytable (
        ID INT,
        REGION VARCHAR(20),
        COUNT INT
      );
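
      If you have many tables, you can bootstrap these schema files from the Parquet schemas themselves. The following is a rough sketch, assuming the pyarrow library and a hypothetical file name; the type mapping is partial and illustrative, so check it against the supported data types listed later in this document.

        import pyarrow.parquet as pq

        # Illustrative, partial mapping from Arrow types to TiDB column types.
        TYPE_MAP = {
            "int32": "INT",
            "int64": "BIGINT",
            "float": "FLOAT",
            "double": "DOUBLE",
            "string": "VARCHAR(255)",
            "binary": "BLOB",
        }

        schema = pq.read_schema("mydb.mytable.parquet")  # hypothetical file name
        columns = ",\n  ".join(
            f"{field.name} {TYPE_MAP.get(str(field.type), 'TEXT')}" for field in schema
        )
        with open("mydb.mytable-schema.sql", "w") as out:
            out.write(f"CREATE TABLE mytable (\n  {columns}\n);\n")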

Step 3. Configure cross-account access

To allow TiDB Cloud to access the Parquet files in the Amazon S3, GCS, Azure Blob Storage, or Alibaba Cloud Object Storage Service bucket, do one of the following:

  • If your Parquet files are located in Amazon S3, configure Amazon S3 access.
  • If your Parquet files are located in GCS, configure GCS access.
  • If your Parquet files are located in Azure Blob Storage, configure Azure Blob Storage access.
  • If your Parquet files are located in Alibaba Cloud Object Storage Service, configure Alibaba Cloud Object Storage Service (OSS) access.

Step 4. Import Parquet files

To import the Parquet files to TiDB Cloud Starter or TiDB Cloud Essential, take the following steps:

Amazon S3

    1. Open the Import page for your target cluster.

      1. Log in to the TiDB Cloud console and navigate to the Clusters page of your project.

      2. Click the name of your target cluster to go to its overview page, and then click Data > Import in the left navigation pane.

    2. Click Import data from Cloud Storage.

    3. On the Import Data from Cloud Storage page, provide the following information:

      • Storage Provider: select Amazon S3.
      • Source Files URI:
        • When importing one file, enter the source file URI in the following format: s3://[bucket_name]/[data_source_folder]/[file_name].parquet. For example, s3://sampledata/ingest/TableName.01.parquet.
        • When importing multiple files, enter the source folder URI in the following format: s3://[bucket_name]/[data_source_folder]/. For example, s3://sampledata/ingest/.
      • Credential: you can use either an AWS Role ARN or an AWS access key to access your bucket. For more information, see Configure Amazon S3 access.
        • AWS Role ARN: enter the AWS Role ARN value.
        • AWS Access Key: enter the AWS access key ID and AWS secret access key.
    4. Click Next.

    5. In the Destination Mapping section, specify how source files are mapped to target tables.

      When a directory is specified in Source Files URI, the Use File naming conventions for automatic mapping option is selected by default.

      • To let TiDB Cloud automatically map all source files that follow the File naming conventions to their corresponding tables, keep this option selected and select Parquet as the data format.

      • To manually configure the mapping rules to associate your source Parquet files with the target database and table, unselect this option, and then fill in the following fields:

        • Source: enter the file name pattern in the [file_name].parquet format. For example: TableName.01.parquet. You can also use wildcards to match multiple files. Only * and ? wildcards are supported.

          • my-data?.parquet: matches all Parquet files that start with my-data followed by a single character, such as my-data1.parquet and my-data2.parquet.
          • my-data*.parquet: matches all Parquet files that start with my-data, such as my-data-2023.parquet and my-data-final.parquet.
        • Target Database and Target Table: select the target database and table to import the data to.

    6. Click Next. TiDB Cloud scans the source files accordingly.

    7. Review the scan results, check the data files found and corresponding target tables, and then click Start Import.

    8. When the import progress shows Completed, check the imported tables.
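
    Before starting the import, you can optionally verify that the URI matches the files you expect. The following is a minimal sketch, assuming the boto3 library, locally configured AWS credentials, and the illustrative s3://sampledata/ingest/ location used above.

      import boto3

      # List the Parquet files under the source folder to confirm the URI is correct.
      s3 = boto3.client("s3")
      response = s3.list_objects_v2(Bucket="sampledata", Prefix="ingest/")
      for obj in response.get("Contents", []):
          if obj["Key"].endswith(".parquet"):
              print(obj["Key"], obj["Size"])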

Google Cloud Storage

    1. Open the Import page for your target cluster.

      1. Log in to the TiDB Cloud console and navigate to the Clusters page of your project.

      2. Click the name of your target cluster to go to its overview page, and then click Data > Import in the left navigation pane.

    2. Click Import data from Cloud Storage.

    3. On the Import Data from Cloud Storage page, provide the following information:

      • Storage Provider: select Google Cloud Storage.
      • Source Files URI:
        • When importing one file, enter the source file URI in the following format: [gcs|gs]://[bucket_name]/[data_source_folder]/[file_name].parquet. For example, [gcs|gs]://sampledata/ingest/TableName.01.parquet.
        • When importing multiple files, enter the source folder URI in the following format: [gcs|gs]://[bucket_name]/[data_source_folder]/. For example, [gcs|gs]://sampledata/ingest/.
      • Credential: you can use a GCS IAM Role Service Account key to access your bucket. For more information, see Configure GCS access.
    4. Click Next.

    5. In the Destination Mapping section, specify how source files are mapped to target tables.

      When a directory is specified in Source Files URI, the Use File naming conventions for automatic mapping option is selected by default.

      • To let TiDB Cloud automatically map all source files that follow the File naming conventions to their corresponding tables, keep this option selected and select Parquet as the data format.

      • To manually configure the mapping rules to associate your source Parquet files with the target database and table, unselect this option, and then fill in the following fields:

        • Source: enter the file name pattern in the [file_name].parquet format. For example: TableName.01.parquet. You can also use wildcards to match multiple files. Only * and ? wildcards are supported.

          • my-data?.parquet: matches all Parquet files that start with my-data followed by a single character, such as my-data1.parquet and my-data2.parquet.
          • my-data*.parquet: matches all Parquet files that start with my-data, such as my-data-2023.parquet and my-data-final.parquet.
        • Target Database and Target Table: select the target database and table to import the data to.

    6. Click Next. TiDB Cloud scans the source files accordingly.

    7. Review the scan results, check the data files found and corresponding target tables, and then click Start Import.

    8. When the import progress shows Completed, check the imported tables.
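
    As with Amazon S3, you can optionally list the source folder first to confirm the URI. The following is a minimal sketch, assuming the google-cloud-storage library, application default credentials, and the illustrative bucket and folder used above.

      from google.cloud import storage

      # List the Parquet files under the source folder to confirm the URI is correct.
      client = storage.Client()
      for blob in client.list_blobs("sampledata", prefix="ingest/"):
          if blob.name.endswith(".parquet"):
              print(blob.name, blob.size)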

Azure Blob Storage

    1. Open the Import page for your target cluster.

      1. Log in to the TiDB Cloud console and navigate to the Clusters page of your project.

      2. Click the name of your target cluster to go to its overview page, and then click Data > Import in the left navigation pane.

    2. Click Import data from Cloud Storage.

    3. On the Import Data from Cloud Storage page, provide the following information:

      • Storage Provider: select Azure Blob Storage.
      • Source Files URI:
        • When importing one file, enter the source file URI in the following format: [azure|https]://[bucket_name]/[data_source_folder]/[file_name].parquet. For example, [azure|https]://sampledata/ingest/TableName.01.parquet.
        • When importing multiple files, enter the source folder URI in the following format: [azure|https]://[bucket_name]/[data_source_folder]/. For example, [azure|https]://sampledata/ingest/.
      • Credential: you can use a shared access signature (SAS) token to access your bucket. For more information, see Configure Azure Blob Storage access.
    4. Click Next.

    5. In the Destination Mapping section, specify how source files are mapped to target tables.

      When a directory is specified in Source Files URI, the Use File naming conventions for automatic mapping option is selected by default.

      • To let TiDB Cloud automatically map all source files that follow the File naming conventions to their corresponding tables, keep this option selected and select Parquet as the data format.

      • To manually configure the mapping rules to associate your source Parquet files with the target database and table, unselect this option, and then fill in the following fields:

        • Source: enter the file name pattern in the [file_name].parquet format. For example: TableName.01.parquet. You can also use wildcards to match multiple files. Only * and ? wildcards are supported.

          • my-data?.parquet: matches all Parquet files that start with my-data followed by a single character, such as my-data1.parquet and my-data2.parquet.
          • my-data*.parquet: matches all Parquet files that start with my-data, such as my-data-2023.parquet and my-data-final.parquet.
        • Target Database and Target Table: select the target database and table to import the data to.

    6. Click Next. TiDB Cloud scans the source files accordingly.

    7. Review the scan results, check the data files found and corresponding target tables, and then click Start Import.

    8. When the import progress shows Completed, check the imported tables.
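
    To double-check the source folder before importing, you can list it with the same SAS token. The following is a minimal sketch, assuming the azure-storage-blob library; the account name, container, and token are placeholders.

      from azure.storage.blob import ContainerClient

      # The container URL embeds the SAS token; all names here are placeholders.
      container = ContainerClient.from_container_url(
          "https://<account>.blob.core.windows.net/sampledata?<sas-token>"
      )
      for blob in container.list_blobs(name_starts_with="ingest/"):
          if blob.name.endswith(".parquet"):
              print(blob.name, blob.size)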

Alibaba Cloud Object Storage Service (OSS)

    1. Open the Import page for your target cluster.

      1. Log in to the TiDB Cloud console and navigate to the Clusters page of your project.

      2. Click the name of your target cluster to go to its overview page, and then click Data > Import in the left navigation pane.

    2. Click Import data from Cloud Storage.

    3. On the Import Data from Cloud Storage page, provide the following information:

      • Storage Provider: select Alibaba Cloud OSS.
      • Source Files URI:
        • When importing one file, enter the source file URI in the following format: oss://[bucket_name]/[data_source_folder]/[file_name].parquet. For example, oss://sampledata/ingest/TableName.01.parquet.
        • When importing multiple files, enter the source folder URI in the following format: oss://[bucket_name]/[data_source_folder]/. For example, oss://sampledata/ingest/.
      • Credential: you can use an AccessKey pair to access your bucket. For more information, see Configure Alibaba Cloud Object Storage Service (OSS) access.
    4. Click Next.

    5. In the Destination Mapping section, specify how source files are mapped to target tables.

      When a directory is specified in Source Files URI, the Use File naming conventions for automatic mapping option is selected by default.

      • To let TiDB Cloud automatically map all source files that follow the File naming conventions to their corresponding tables, keep this option selected and select Parquet as the data format.

      • To manually configure the mapping rules to associate your source Parquet files with the target database and table, unselect this option, and then fill in the following fields:

        • Source: enter the file name pattern in the [file_name].parquet format. For example: TableName.01.parquet. You can also use wildcards to match multiple files. Only * and ? wildcards are supported.

          • my-data?.parquet: matches all Parquet files that start with my-data followed by a single character, such as my-data1.parquet and my-data2.parquet.
          • my-data*.parquet: matches all Parquet files that start with my-data, such as my-data-2023.parquet and my-data-final.parquet.
        • Target Database and Target Table: select the target database and table to import the data to.

    6. Click Next. TiDB Cloud scans the source files accordingly.

    7. Review the scan results, check the data files found and corresponding target tables, and then click Start Import.

    8. When the import progress shows Completed, check the imported tables.
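
    You can likewise list the source folder before importing. The following is a minimal sketch, assuming the oss2 library; the endpoint, credentials, and bucket name are placeholders.

      import oss2

      # The credentials, endpoint, and bucket name below are placeholders.
      auth = oss2.Auth("<access-key-id>", "<access-key-secret>")
      bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "sampledata")
      for obj in oss2.ObjectIterator(bucket, prefix="ingest/"):
          if obj.key.endswith(".parquet"):
              print(obj.key, obj.size)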

    When you run an import task, if any unsupported or invalid conversions are detected, TiDB Cloud terminates the import job automatically and reports an importing error.

    If you get an importing error, do the following:

    1. Drop the partially imported table.

    2. Check the table schema file. If there are any errors, correct the table schema file.

    3. Check the data types in the Parquet files.

      If the Parquet files contain any unsupported data types (for example, nested STRUCT, ARRAY, or MAP), you need to regenerate the Parquet files using supported data types (for example, STRING), as shown in the sketch after this list.

    4. Try the import task again.
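
    The following is a hedged sketch of such a regeneration step, assuming the pyarrow library and a hypothetical file name. It serializes nested columns to plain strings, which is one option among several.

      import pyarrow as pa
      import pyarrow.parquet as pq

      table = pq.read_table("mydb.mytable.parquet")  # hypothetical file name
      columns = []
      for column in table.columns:
          if pa.types.is_nested(column.type):
              # Nested STRUCT/ARRAY/MAP columns cannot be imported; store them as strings.
              columns.append(pa.array([str(v) for v in column.to_pylist()], type=pa.string()))
          else:
              columns.append(column)
      fixed = pa.table(dict(zip(table.column_names, columns)))
      pq.write_table(fixed, "mydb.mytable.fixed.parquet")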

    Supported data types

    The following table lists the supported Parquet data types that can be imported to TiDB Cloud Starter and TiDB Cloud Essential.

    Parquet Primitive Type  | Parquet Logical Type | Types in TiDB or MySQL
    ------------------------|----------------------|-----------------------
    DOUBLE                  | DOUBLE               | DOUBLE, FLOAT
    FIXED_LEN_BYTE_ARRAY(9) | DECIMAL(20,0)        | BIGINT UNSIGNED
    FIXED_LEN_BYTE_ARRAY(N) | DECIMAL(p,s)         | DECIMAL, NUMERIC
    INT32                   | DECIMAL(p,s)         | DECIMAL, NUMERIC
    INT32                   | N/A                  | INT, MEDIUMINT, YEAR
    INT64                   | DECIMAL(p,s)         | DECIMAL, NUMERIC
    INT64                   | N/A                  | BIGINT, INT UNSIGNED, MEDIUMINT UNSIGNED
    INT64                   | TIMESTAMP_MICROS     | DATETIME, TIMESTAMP
    BYTE_ARRAY              | N/A                  | BINARY, BIT, BLOB, CHAR, LINESTRING, LONGBLOB, MEDIUMBLOB, MULTILINESTRING, TINYBLOB, VARBINARY
    BYTE_ARRAY              | STRING               | ENUM, DATE, DECIMAL, GEOMETRY, GEOMETRYCOLLECTION, JSON, LONGTEXT, MEDIUMTEXT, MULTIPOINT, MULTIPOLYGON, NUMERIC, POINT, POLYGON, SET, TEXT, TIME, TINYTEXT, VARCHAR
    SMALLINT                | N/A                  | INT32
    SMALLINT UNSIGNED       | N/A                  | INT32
    TINYINT                 | N/A                  | INT32
    TINYINT UNSIGNED        | N/A                  | INT32

    Troubleshooting

    Resolve warnings during data import

    After clicking Start Import, if you see a warning message such as "can't find the corresponding source files", resolve it by providing a correct source file, renaming the existing one according to Naming Conventions for Data Import, or using Advanced Settings to adjust the mapping.

    After resolving these issues, you need to import the data again.

    Zero rows in the imported tables

    After the import progress shows Completed, check the imported tables. If the number of rows is zero, it means no data files matched the Source Files URI that you entered. In this case, resolve the issue by providing a correct source file, renaming the existing one according to Naming Conventions for Data Import, or using Advanced Settings to make changes. After that, import those tables again.
