TiDB Lightning is a tool for fast, full import of large amounts of data into a TiDB cluster. Currently, TiDB Lightning supports reading SQL dumps exported via Mydumper, as well as CSV data sources. You can use it in the following two scenarios:
- Import large amounts of new data quickly
- Back up and restore all the data
The TiDB Lightning tool set consists of two components:

- `tidb-lightning` (the "front end") reads the data source, imports the database structure into the TiDB cluster, transforms the data into Key-Value (KV) pairs, and sends them to `tikv-importer`.
- `tikv-importer` (the "back end") combines and sorts the KV pairs and then imports these sorted pairs as a whole into the TiKV cluster.
This tutorial assumes you use several new, clean CentOS 7 instances. You can deploy the virtual machines locally using VMware, VirtualBox, or similar tools, or use small cloud virtual machines on a vendor-supplied platform. Because TiDB Lightning consumes a large amount of system resources, it is recommended that you allocate at least 4 GB of memory for running it.

The deployment method in this tutorial is recommended only for testing and trials. Do not apply it in a production or development environment.
Use `mydumper` to export data from MySQL:

```sh
./bin/mydumper -h 127.0.0.1 -P 3306 -u root -t 16 -F 256 -B test -T t1,t2 --skip-tz-utc -o /data/my_database/
```
In the above command:

- `-B test`: means the data is exported from the `test` database.
- `-T t1,t2`: means only the `t1` and `t2` tables are exported.
- `-t 16`: means 16 threads are used to export the data.
- `-F 256`: means a table is partitioned into chunks, and each chunk is 256 MB.
- `--skip-tz-utc`: means the inconsistency of time zone settings between MySQL and the data exporting machine is ignored, and automatic time zone conversion is disabled.
After executing this command, the full backup data is exported to the `/data/my_database` directory.
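Before moving on, it is worth sanity-checking the export. A quick listing of the output directory should show one schema file per database and per-table data files, split into chunks because of `-F 256` (the file names below are an assumption based on Mydumper's usual naming scheme; your output may differ slightly):

```sh
# List the exported files and their sizes.
ls -lh /data/my_database/
# Typical contents (assumed naming):
#   metadata                  # export position information
#   test-schema-create.sql    # CREATE DATABASE statement for `test`
#   test.t1-schema.sql        # CREATE TABLE statement for t1
#   test.t1.sql               # data for t1 (chunked files if the table exceeds 256 MB)
#   test.t2-schema.sql
#   test.t2.sql               # data for t2
```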
Before importing the data, you need to deploy a TiDB cluster (later than v2.0.9). This tutorial uses TiDB v3.0.4. For the deployment method, refer to TiDB Introduction.
Download the TiDB Lightning installation package from the following link:
Choose the same version of TiDB Lightning as that of the TiDB cluster.
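Before deploying the binaries, you can verify that the package version matches the cluster. Both binaries print their version with the `-V` flag (a minimal check; the flag is assumed to behave as in other TiDB tools):

```sh
# Print version information; it should match the TiDB cluster version (v3.0.4 here).
./bin/tidb-lightning -V
./bin/tikv-importer -V
```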
Upload `bin/tikv-importer` in the package to the server where TiDB Lightning is deployed.
Configure `tikv-importer.toml`:

```toml
# The template of the tikv-importer configuration file

# Log file
log-file = "tikv-importer.log"

# Log level: "trace", "debug", "info", "warn", "error" or "off"
log-level = "info"

[server]
# The listening address of tikv-importer. tidb-lightning connects to this address for data write.
addr = "192.168.20.10:8287"

[import]
# The directory of the engine file.
import-dir = "/mnt/ssd/data.import/"
```
After saving the configuration file, start `tikv-importer`:

```sh
nohup ./tikv-importer -C tikv-importer.toml > nohup.out &
```
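To confirm that `tikv-importer` started correctly, you can check its log and verify that it is listening on the address configured under `[server]` (a quick sanity check using standard Linux tooling):

```sh
# The last lines of the log should show the server starting without errors.
tail tikv-importer.log

# Verify that tikv-importer is listening on the configured port (8287 above).
ss -tlnp | grep 8287
```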
Upload `bin/tidb-lightning` and `bin/tidb-lightning-ctl` in the installation package to the server where TiDB Lightning is deployed.
Upload the prepared data source to the server.
After configuring the parameters properly, use a `nohup` command to start the `tidb-lightning` process. If you run the command directly in the command line, the process might exit because of a SIGHUP signal it receives. Instead, it's preferable to run a bash script that contains the `nohup` command:
```bash
#!/bin/bash
nohup ./tidb-lightning \
    --importer 172.16.31.10:8287 \
    -d /data/my_database/ \
    --tidb-host 172.16.31.2 \
    --tidb-user root \
    --log-file tidb-lightning.log \
    > nohup.out &
```
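While the import is running, you can follow the log file to watch progress (the exact log format varies between versions, so this is just a convenient way to observe the process):

```sh
# Follow the import progress; tidb-lightning logs each file and engine it processes.
tail -f tidb-lightning.log
```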
After the import is completed, TiDB Lightning exits automatically. If the import is successful, you can find `tidb lightning exit` in the last line of the log file.
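At this point you can spot-check the result by connecting to TiDB with a MySQL client and counting rows in the imported tables (4000 is TiDB's default port, and a passwordless root user is assumed for this test deployment; adjust both if your cluster differs):

```sh
# Connect to the TiDB server used for the import and verify the imported tables.
mysql -h 172.16.31.2 -P 4000 -u root \
    -e "SELECT COUNT(*) FROM test.t1; SELECT COUNT(*) FROM test.t2;"
```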
If any error occurs, refer to TiDB Lightning FAQs.
This tutorial briefly introduces what TiDB Lightning is and how to quickly deploy it to import full backup data into a TiDB cluster.
For detailed features and usage about TiDB Lightning, refer to TiDB Lightning Overview.