# TiDB 3.0 Upgrade Guide

This document is targeted at users who want to upgrade from TiDB 2.0 or 2.1 to 3.0, or from an earlier 3.0 version to a later 3.0 version. TiDB 3.0 is compatible with TiDB Binlog of the cluster version.
## Upgrade caveat

- Rolling back to 2.1.x or earlier versions after the upgrade is not supported.
- Before upgrading to 3.0 from 2.0.6 or earlier versions, check whether any DDL operations are running, especially time-consuming ones such as `Add Index`. If there are any, wait for the DDL operations to finish before you upgrade.
- Parallel DDL is supported in TiDB 2.1 and later versions. Therefore, for clusters with a TiDB version earlier than 2.0.1, a rolling update to TiDB 3.0 is not supported. To upgrade, choose either of the following two options:
    - Stop the cluster and upgrade to 3.0 directly.
    - Roll update to 2.0.1 or a later 2.0.x version, and then roll update to 3.0.
- Do not execute any DDL statements during the upgrade process; otherwise, undefined behavior might occur.
## Step 1: Install Ansible and dependencies on the Control Machine

If you have already installed Ansible and its dependencies, you can skip this step.

TiDB Ansible release-3.0 depends on Ansible 2.4.2 ~ 2.7.11 (`2.4.2 ≤ ansible ≤ 2.7.11`, Ansible 2.7.11 recommended) and the Python modules `jinja2 ≥ 2.9.6` and `jmespath ≥ 0.9.0`.

To make it easy to manage dependencies, use `pip` to install Ansible and its dependencies. For details, see Install Ansible and its dependencies on the Control Machine. For an offline environment, see Install Ansible and its dependencies offline on the Control Machine.
After the installation is finished, you can view the version information using the following commands:

```
$ pip show jinja2
Name: Jinja2
Version: 2.10

$ pip show jmespath
Name: jmespath
Version: 0.9.0
```
- You must install Ansible and its dependencies by following the above procedures.
- Make sure that the Jinja2 version is correct; otherwise, an error occurs when you start Grafana.
- Make sure that the jmespath version is correct; otherwise, an error occurs when you perform a rolling update to TiKV.
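As a quick sanity check, the version ranges above can also be verified programmatically. This is a generic sketch (not part of tidb-ansible) that compares dotted version strings as integer tuples; the helper names are illustrative:

```python
def parse(version):
    """Turn a dotted version string into a comparable integer tuple."""
    return tuple(int(part) for part in version.split("."))

def in_range(version, low, high=None):
    """Check low <= version (<= high, when an upper bound applies)."""
    v = parse(version)
    if high is not None and v > parse(high):
        return False
    return v >= parse(low)

# The ranges required by TiDB Ansible release-3.0, per this guide:
assert in_range("2.7.11", "2.4.2", "2.7.11")     # Ansible
assert in_range("2.10", "2.9.6")                 # Jinja2
assert in_range("0.9.0", "0.9.0")                # jmespath
assert not in_range("2.8.0", "2.4.2", "2.7.11")  # too new for release-3.0
```

This only handles plain dotted numbers; for versions with pre-release suffixes, rely on the `pip show` output above instead.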
## Step 2: Download TiDB Ansible to the Control Machine

Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory.

Back up the `tidb-ansible` folder of TiDB 2.0, 2.1, or an earlier 3.0 version using the following command:

```
mv tidb-ansible tidb-ansible-bak
```
Download tidb-ansible with the tag corresponding to TiDB 3.0. For more details, see Download TiDB Ansible to the Control Machine. The default folder name is `tidb-ansible`.

```
git clone -b $tag https://github.com/pingcap/tidb-ansible.git
```
## Step 3: Edit the inventory.ini file and the configuration file

Log in to the Control Machine using the `tidb` user account and edit the `inventory.ini` file. For IP information, see the `/home/tidb/tidb-ansible-bak/inventory.ini` backup file.

Pay special attention to the following variable configuration. For variable meanings, see Description of other variables.

Make sure that `ansible_user` is the normal user. For unified privilege management, remote installation using the root user is no longer supported. The default configuration uses the `tidb` user as the SSH remote user and the program running user.

```
## Connection
# ssh via normal user
ansible_user = tidb
```

You can refer to How to configure SSH mutual trust and sudo rules on the Control Machine to automatically configure mutual trust among hosts.
Keep the `process_supervision` variable consistent with that in the previous version. It is recommended to use `systemd`.

```
# process supervision, [systemd, supervise]
process_supervision = systemd
```

If you need to modify this variable, see How to modify the supervision method of a process from `supervise` to `systemd`. Before you upgrade, first modify the supervision method of a process using the `/home/tidb/tidb-ansible-bak/` backup branch.
If you have previously customized the configuration files of TiDB cluster components, refer to the backup files to modify the corresponding configuration files in the new `tidb-ansible` directory. Note the following parameter changes:
- In the TiKV configuration, `end-point-concurrency` is changed to three parameters: `high-concurrency`, `normal-concurrency`, and `low-concurrency`.

    ```
    readpool:
      coprocessor:
        # Notice: if CPU_NUM > 8, default thread pool size for coprocessors
        # will be set to CPU_NUM * 0.8.
        # high-concurrency: 8
        # normal-concurrency: 8
        # low-concurrency: 8
    ```

    For the cluster topology of multiple TiKV instances (processes) on a single machine, you need to modify the three parameters above. Recommended configuration: the number of TiKV instances * the parameter value = the number of CPU cores * 0.8.
- In the TiKV configuration, the `block-cache-size` parameter of different CFs is changed to a single `storage.block-cache.capacity` parameter.

    ```
    storage:
      block-cache:
        capacity: "1GB"
    ```

    For the cluster topology of multiple TiKV instances (processes) on a single machine, you need to modify the `capacity` parameter: capacity = MEM_TOTAL * 0.5 / the number of TiKV instances.
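The two sizing rules above (readpool concurrency and block cache capacity) can be computed directly. A minimal sketch, not part of tidb-ansible; the function names and example numbers are illustrative:

```python
def coprocessor_concurrency(cpu_cores, tikv_instances):
    """instances * value = cores * 0.8  =>  value = cores * 0.8 / instances."""
    return int(cpu_cores * 0.8 / tikv_instances)

def block_cache_capacity_gb(mem_total_gb, tikv_instances):
    """capacity = MEM_TOTAL * 0.5 / the number of TiKV instances."""
    return mem_total_gb * 0.5 / tikv_instances

# Example: a machine with 40 cores and 128 GB RAM running 2 TiKV instances.
print(coprocessor_concurrency(40, 2))   # 16
print(block_cache_capacity_gb(128, 2))  # 32.0
```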
- In the TiKV configuration, you need to configure the `tikv_status_port` port for the multiple-instances-on-a-single-machine scenario. Before you configure it, check whether a port conflict exists.

    ```
    [tikv_servers]
    TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv1"
    TiKV1-2 ansible_host=172.16.10.4 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv1"
    TiKV2-1 ansible_host=172.16.10.5 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv2"
    TiKV2-2 ansible_host=172.16.10.5 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv2"
    TiKV3-1 ansible_host=172.16.10.6 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv3"
    TiKV3-2 ansible_host=172.16.10.6 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv3"
    ```
## Step 4: Download TiDB 3.0 binary to the Control Machine

Make sure that `tidb_version = v3.0.x` is set in the `tidb-ansible/inventory.ini` file, and then download the TiDB 3.0 binary to the Control Machine.
## Step 5: Perform a rolling update to TiDB cluster components

If the `process_supervision` variable uses the default `systemd` parameter, perform a rolling update to the TiDB cluster using the playbook that corresponds to your current TiDB cluster version:

- When the TiDB cluster version is earlier than 3.0.0, use
- When the TiDB cluster version is 3.0.0 or later, use `rolling_update.yml` for both rolling updates and daily rolling restarts.

If the `process_supervision` variable uses the `supervise` parameter, perform a rolling update to the TiDB cluster using `rolling_update.yml`, no matter what version the current TiDB cluster is.
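The playbook-selection rules above reduce to a small decision function. This sketch is for illustration only and is not part of tidb-ansible; the playbook for systemd-supervised clusters older than 3.0.0 is not named in this excerpt, so the function returns `None` for that case:

```python
def rolling_playbook(version, process_supervision="systemd"):
    """version is a (major, minor, patch) tuple, e.g. (3, 0, 1)."""
    if process_supervision == "supervise":
        return "rolling_update.yml"  # used regardless of cluster version
    if version >= (3, 0, 0):
        return "rolling_update.yml"  # also used for daily rolling restarts
    return None  # pre-3.0 systemd playbook not named in this excerpt

assert rolling_playbook((3, 0, 1)) == "rolling_update.yml"
assert rolling_playbook((2, 1, 0), "supervise") == "rolling_update.yml"
```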
- TiDB 3.0 Upgrade Guide
- Upgrade caveat
- Step 1: Install Ansible and dependencies on the Control Machine
- Step 2: Download TiDB Ansible to the Control Machine
- Step 3: Edit the inventory.ini file and the configuration file
- Step 4: Download TiDB 3.0 binary to the Control Machine
- Step 5: Perform a rolling update to TiDB cluster components
- Step 6: Perform a rolling update to TiDB monitoring components