You are viewing the documentation of an older version of the TiDB database (TiDB v4.0).
# TiDB Lightning Glossary

This page explains the special terms used in TiDB Lightning's logs, monitoring, configurations, and documentation.
## Analyze

Because TiDB Lightning imports data without going through TiDB, the statistics information is not automatically updated. Therefore, TiDB Lightning explicitly analyzes every table after importing. This step can be omitted by setting the `post-restore.analyze` configuration to `false`.
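As a minimal configuration sketch (assuming the v4.0 `tidb-lightning.toml` layout), skipping the post-import analyze step might look like:

```toml
# tidb-lightning.toml
[post-restore]
# Skip the automatic ANALYZE after import; table statistics must then
# be refreshed manually if the optimizer needs them.
analyze = false
```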
## `AUTO_INCREMENT_ID`

Every table has an associated `AUTO_INCREMENT_ID` counter to provide the default value of an auto-incrementing column. In TiDB, this counter is additionally used to assign row IDs.

Because TiDB Lightning imports data without going through TiDB, the `AUTO_INCREMENT_ID` counter is not automatically updated. Therefore, TiDB Lightning explicitly alters `AUTO_INCREMENT_ID` to a valid value. This step is always performed, even if the table has no auto-incrementing columns.
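The effect is comparable to running a statement like the following after the import (the table name and value here are hypothetical):

```sql
-- Bump the counter past the largest imported row ID so that future
-- inserts do not collide with imported rows.
ALTER TABLE mydb.mytable AUTO_INCREMENT = 1000001;
```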
## Back end

Back end is the destination where TiDB Lightning sends the parsed result. Also spelled as "backend".

See TiDB Lightning Backends for details.
## Checkpoint

TiDB Lightning continuously saves its progress into a local file or a remote database while importing. This allows it to resume from an intermediate state should it crash in the process. See the Checkpoints section for details.
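A configuration sketch (values are illustrative, assuming the v4.0 `tidb-lightning.toml` layout):

```toml
# tidb-lightning.toml
[checkpoint]
# Persist progress so an interrupted import can resume where it stopped.
enable = true
# "file" stores checkpoints locally; "mysql" stores them in a
# MySQL-compatible database.
driver = "file"
```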
## Checksum

In TiDB Lightning, the checksum of a table is a set of 3 numbers calculated from the content of each KV pair in that table. These numbers are respectively:

- the number of KV pairs,
- the total length of all KV pairs, and
- the bitwise XOR of the CRC-64-ECMA values of each pair.

TiDB Lightning validates the imported data by comparing the local and remote checksums of every table. The program stops if any number does not match. You can skip this check by setting the `post-restore.checksum` configuration to `false`.

See also the FAQs for how to properly handle checksum mismatch.
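A simplified sketch of how such a triple of numbers could be computed. The CRC used here is the reflected CRC-64/XZ variant of the ECMA-182 polynomial; the actual key/value encoding in TiDB Lightning is more involved, so this only illustrates the shape of the calculation:

```python
def crc64_ecma(data: bytes) -> int:
    """Bitwise CRC-64/XZ (ECMA-182 polynomial, reflected)."""
    poly = 0xC96C5795D7870F42  # reflected form of 0x42F0E1EBA9EA3693
    crc = 0xFFFFFFFFFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (poly if crc & 1 else 0)
    return crc ^ 0xFFFFFFFFFFFFFFFF


def table_checksum(kv_pairs):
    """Return (count, total length, XOR of per-pair CRC-64) for KV pairs."""
    count, total_len, xor = 0, 0, 0
    for key, value in kv_pairs:
        count += 1
        total_len += len(key) + len(value)
        xor ^= crc64_ecma(key + value)
    return count, total_len, xor


# Two hypothetical encoded KV pairs for demonstration.
pairs = [(b"t\x80\x00\x01", b"row-1"), (b"t\x80\x00\x02", b"row-2")]
count, total_len, xor = table_checksum(pairs)
```

Because XOR is order-independent, the aggregate can be computed over KV pairs in any order, which suits Lightning's unordered processing.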
## Chunk

A continuous range of source data, normally equivalent to a single file in the data source.

When a file is too large, TiDB Lightning might split it into multiple chunks.
## Compaction

An operation that merges multiple small SST files into one large SST file and cleans up deleted entries. TiKV automatically compacts data in the background while TiDB Lightning is importing.

For legacy reasons, you can still configure TiDB Lightning to explicitly trigger a compaction every time a table is imported. However, this is not recommended, and the corresponding settings are disabled by default.

See RocksDB's wiki page on Compaction for its technical details.
## Data engine

An engine for sorting actual row data.

When a table is very large, its data is placed into multiple data engines to improve task pipelining and save space on TiKV Importer. By default, a new data engine is opened for every 100 GB of SQL data, which can be configured through the `mydumper.batch-size` setting.

TiDB Lightning processes multiple data engines concurrently. This is controlled by the `lightning.table-concurrency` setting.
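For illustration, these two settings could be adjusted in `tidb-lightning.toml` like so (the values are examples, not recommendations):

```toml
# tidb-lightning.toml
[mydumper]
# Open a new data engine for every 50 GB of SQL data instead of the
# default 100 GB.
batch-size = 53687091200

[lightning]
# Process up to 4 data-engine batches at the same time.
table-concurrency = 4
```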
## Engine

In TiKV Importer, an engine is a RocksDB instance for sorting KV pairs.

TiDB Lightning transfers data to TiKV Importer through engines. It first opens an engine, sends KV pairs to it (in no particular order), and finally closes the engine. The engine sorts the received KV pairs after it is closed. These closed engines can then be uploaded to the TiKV stores for ingestion.

Engines use TiKV Importer's `import-dir` as temporary storage, and are sometimes referred to as "engine files".
## Filter

A configuration list that specifies which tables to import or exclude.

See Table Filter for details.
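A hypothetical filter sketch, assuming the v4.0 `mydumper.filter` wildcard syntax (the database and table names are made up):

```toml
# tidb-lightning.toml
[mydumper]
# Import everything in `mydb` except tables whose names start with "tmp_".
filter = ['mydb.*', '!mydb.tmp_*']
```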
## Import mode

A configuration that optimizes TiKV for writing at the cost of degraded read speed and space usage.
## Index engine

An engine for sorting indices.

Regardless of the number of indices, every table is associated with exactly one index engine.

TiDB Lightning processes multiple index engines concurrently. This is controlled by the `lightning.index-concurrency` setting. Since every table has exactly one index engine, this also configures the maximum number of tables to process at the same time.
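For example (an illustrative value, not a recommendation):

```toml
# tidb-lightning.toml
[lightning]
# At most 2 tables (and hence 2 index engines) are processed in parallel.
index-concurrency = 2
```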
## Ingest

An operation that inserts the entire content of an SST file into the RocksDB (TiKV) store.

Ingestion is a very fast operation compared with inserting KV pairs one by one. This operation is the determining factor for the performance of TiDB Lightning.

See RocksDB's wiki page on Creating and Ingesting SST files for its technical details.
## KV pair

Abbreviation of "key-value pair".
## KV encoder

A routine that parses SQL or CSV rows into KV pairs. Multiple KV encoders run in parallel to speed up processing.
## Local checksum

The checksum of a table calculated by TiDB Lightning itself before sending the KV pairs to TiKV Importer.
## Normal mode

The mode where import mode is disabled.
## Remote checksum

The checksum of a table calculated by TiDB after it has been imported.
## Scattering

An operation that randomly reassigns the leader and the peers of a Region. Scattering ensures that the imported data are distributed evenly among TiKV stores. This reduces stress on PD.
## Splitting

An engine is typically very large (around 100 GB), which is not friendly to TiKV if treated as a single Region. TiKV Importer splits an engine into multiple small SST files (configurable by TiKV Importer's `import.region-split-size` setting) before uploading.
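A configuration sketch, assuming the v4.0 `tikv-importer.toml` layout and a human-readable size string (the value is illustrative):

```toml
# tikv-importer.toml
[import]
# Split closed engines into SST files of roughly this size before
# uploading them to TiKV.
region-split-size = "512MB"
```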
## SST file

SST is the abbreviation of "sorted string table". An SST file is RocksDB's (and thus TiKV's) native storage format for a collection of KV pairs.