- Key Features
- Horizontal Scalability
- MySQL Compatible Syntax
- Replicate from and to MySQL
- Distributed Transactions with Strong Consistency
- Cloud Native Architecture
- Minimize ETL with HTAP
- Fault Tolerance & Recovery with Raft
- Automatic Rebalancing
- Deployment and Orchestration with Ansible, Kubernetes, Docker
- JSON Support
- Spark Integration
- Read Historical Data Without Restoring from Backup
- Fast Import and Restore of Data
- Hybrid of Column and Row Storage
- SQL Plan Management
- Open Source
- Online Schema Changes
- Get Started
- From Binary Tarball
- Orchestrated Deployment
- Geographic Redundancy
- SQL Language Structure
- Data Types
- Numeric Types
- Date and Time Types
- String Types
- Functions and Operators
- Function and Operator Reference
- Type Conversion in Expression Evaluation
- Control Flow Functions
- String Functions
- Numeric Functions and Operators
- Date and Time Functions
- Bit Functions and Operators
- Cast Functions and Operators
- Encryption and Compression Functions
- Information Functions
- JSON Functions
- Aggregate (GROUP BY) Functions
- Miscellaneous Functions
- Precision Math
- SQL Statements
- ADMIN CANCEL DDL
- ADMIN CHECKSUM TABLE
- ADMIN CHECK [TABLE|INDEX]
- ADMIN SHOW DDL [JOBS|QUERIES]
- CREATE TABLE LIKE
- SET [NAMES|CHARACTER SET]
- SET [GLOBAL|SESSION] <variable>
- SHOW CHARACTER SET
- SHOW [FULL] COLUMNS FROM
- SHOW CREATE TABLE
- SHOW [FULL] FIELDS FROM
- SHOW INDEXES [FROM|IN]
- SHOW INDEX [FROM|IN]
- SHOW KEYS [FROM|IN]
- SHOW [FULL] PROCESSLIST
- SHOW [FULL] TABLES
- SHOW TABLE STATUS
- SHOW [GLOBAL|SESSION] VARIABLES
- System Databases
- Key Monitoring Metrics
- Best Practices
- TiDB Binlog
- TiDB Lightning
- All Releases
The data migration process described in this document uses TiDB Lightning. The steps are as follows.
Before you start the migration, deploy TiDB Lightning.
- If you choose the Importer-backend to import data, you need to deploy `tikv-importer` along with TiDB Lightning. During the import process, the TiDB cluster cannot provide services. This mode imports data quickly and is suitable for importing a large amount of data (above the TB level).
This document takes the TiDB-backend as an example. Create the `tidb-lightning.toml` configuration file and add the following major configurations in the file:

Set `data-source-dir` under `[mydumper]` to the path of the MySQL SQL file.

```toml
[mydumper]
# Data source directory
data-source-dir = "/data/export"
```
If a corresponding schema already exists in the downstream, set `no-schema = true` to skip the creation of the schema.
Add the configuration of the target TiDB cluster.

```toml
[tidb]
# The target cluster information. Fill in one address of tidb-server.
host = "172.16.31.1"
port = 4000
user = "root"
password = ""
```
For other configurations, see TiDB Lightning Configuration.
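Putting the fragments above together, a minimal `tidb-lightning.toml` for the TiDB-backend might look like the following sketch. The `backend = "tidb"` line under `[tikv-importer]` selects the TiDB-backend; the host address and data directory are the example values used in this document, so adjust them for your environment:

```toml
# Minimal sketch of tidb-lightning.toml for the TiDB-backend,
# assembled from the fragments shown above.
[tikv-importer]
# Use the TiDB-backend; no tikv-importer deployment is needed in this mode.
backend = "tidb"

[mydumper]
# Data source directory containing the exported SQL files.
data-source-dir = "/data/export"

[tidb]
# One address of tidb-server in the target cluster.
host = "172.16.31.1"
port = 4000
user = "root"
password = ""
```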
Run TiDB Lightning to start the import operation. If you start TiDB Lightning by using `nohup` directly in the command line, the program might exit because of the `SIGHUP` signal. Therefore, it is recommended to put the `nohup` command in a script. For example:

```bash
#!/bin/bash
nohup ./tidb-lightning -config tidb-lightning.toml > nohup.out &
```
After the import operation is started, view the progress in either of the following two ways:

- Search for `progress` in the logs, which is updated every 5 minutes by default.
- Access the monitoring dashboard. See Monitor TiDB Lightning for details.
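As a quick sketch of the log-based check, assuming the log is redirected to `nohup.out` as in the script above (the file name and exact log message format are assumptions; adjust them for your setup), you can filter for the periodic progress lines:

```shell
# Filter the periodic progress lines from the TiDB Lightning log and
# show the most recent ones. Redirect stderr in case the file is absent.
grep -i "progress" nohup.out 2>/dev/null | tail -n 5
```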