TiDB 3.0 Upgrade Guide
This document is targeted at users who want to upgrade from TiDB 2.0 or 2.1 to TiDB 3.0, or from an earlier 3.0 version to a later 3.0 version. TiDB 3.0 is compatible with the cluster version of TiDB Binlog.
Rolling back to 2.1.x or earlier versions after upgrading is not supported.
Before upgrading to 3.0 from 2.0.6 or earlier versions, check if there are any running DDL operations, especially time-consuming ones like `Add Index`. If there are any, wait for the DDL operations to finish before you upgrade.
Parallel DDL is supported in TiDB 2.1 and later versions. Therefore, for clusters with a TiDB version earlier than 2.0.1, rolling update to TiDB 3.0 is not supported. To upgrade, you can choose either of the following two options:
- Stop the cluster and upgrade to 3.0 directly.
- Perform a rolling update to 2.0.1 or a later 2.0.x version, and then perform a rolling update to 3.0.
Step 1: Install Ansible and dependencies on the Control Machine
TiDB Ansible release-3.0 depends on Ansible 2.4.2 ~ 2.7.11 (2.4.2 ≦ ansible ≦ 2.7.11, Ansible 2.7.11 recommended) and the Python modules of `jinja2` ≧ 2.9.6 and `jmespath` ≧ 0.9.0.
To make it easy to manage dependencies, use `pip` to install Ansible and its dependencies. For details, see Install Ansible and its dependencies on the Control Machine. For an offline environment, see Install Ansible and its dependencies offline on the Control Machine.
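A minimal sketch of the `pip` step under the version constraints above (tidb-ansible also ships a `requirements.txt` that pins compatible versions; see Step 2):

```bash
pip install 'ansible>=2.4.2,<=2.7.11' 'jinja2>=2.9.6' 'jmespath>=0.9.0'
```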
After the installation is finished, you can view the version information using the following commands:

```bash
pip show jinja2
```

```
Name: Jinja2
Version: 2.10
```

```bash
pip show jmespath
```

```
Name: jmespath
Version: 0.9.0
```
Step 2: Download TiDB Ansible to the Control Machine
Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb` directory.

Back up the `tidb-ansible` folder of TiDB 2.0, 2.1, or an earlier 3.0 version using the following command:

```bash
mv tidb-ansible tidb-ansible-bak
```
Download tidb-ansible with the tag corresponding to TiDB 3.0. For more details, see Download TiDB Ansible to the Control Machine. The default folder name is `tidb-ansible`.

```bash
git clone -b $tag https://github.com/pingcap/tidb-ansible.git
```
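For illustration, with an example tag filled in (pick the tidb-ansible tag that matches your target 3.0 release; `v3.0.1` below is only a placeholder):

```bash
git clone -b v3.0.1 https://github.com/pingcap/tidb-ansible.git
```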
Step 3: Edit the `inventory.ini` file and the configuration file
Log in to the Control Machine using the `tidb` user account and enter the `/home/tidb/tidb-ansible` directory to edit the `inventory.ini` file. For IP information, see the `/home/tidb/tidb-ansible-bak/inventory.ini` backup file.
Make sure that `ansible_user` is the normal user. For unified privilege management, remote installation using the root user is no longer supported. The default configuration uses the `tidb` user as the SSH remote user and the program running user.

```ini
## Connection
# ssh via normal user
ansible_user = tidb
```
You can refer to How to configure SSH mutual trust and sudo rules on the Control Machine to automatically configure the mutual trust among hosts.
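If you prefer to set up the SSH trust manually instead, a minimal sketch (the key path and host IP are example values; sudo rules still need to be configured as described in the referenced document):

```bash
# Run on the Control Machine as the tidb user; repeat ssh-copy-id for each target host.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # skip if a key already exists
ssh-copy-id -i ~/.ssh/id_rsa.pub tidb@172.16.10.4
```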
Keep the `process_supervision` variable consistent with that in the previous version. It is recommended to use `systemd`.

```ini
# process supervision, [systemd, supervise]
process_supervision = systemd
```

If you need to modify this variable, see How to modify the supervision method of a process from `supervise` to `systemd`. Before you upgrade, first use the backed-up `/home/tidb/tidb-ansible-bak/` branch to modify the supervision method of a process.
Edit the configuration file of TiDB cluster components
If you have previously customized the configuration file of TiDB cluster components, refer to the backup file to modify the corresponding configuration file in the `/home/tidb/tidb-ansible/conf` directory.
Note the following parameter changes:
In the TiKV configuration, `end-point-concurrency` is changed to three parameters: `high-concurrency`, `normal-concurrency`, and `low-concurrency`.

```yaml
readpool:
  coprocessor:
    # Notice: if CPU_NUM > 8, default thread pool size for coprocessors
    # will be set to CPU_NUM * 0.8.
    # high-concurrency: 8
    # normal-concurrency: 8
    # low-concurrency: 8
```
Recommended configuration: the number of TiKV instances * the parameter value = the number of CPU cores * 0.8.
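As a worked example with hypothetical hardware, on a 16-core machine running two TiKV instances, each concurrency parameter would be set to about 16 * 0.8 / 2 ≈ 6:

```yaml
readpool:
  coprocessor:
    # 2 instances * 6 threads ≈ 16 cores * 0.8
    high-concurrency: 6
    normal-concurrency: 6
    low-concurrency: 6
```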
In the TiKV configuration, the `block-cache-size` parameter of different CFs is changed to `block-cache`.

```yaml
storage:
  block-cache:
    capacity: "1GB"
```
`capacity` = MEM_TOTAL * 0.5 / the number of TiKV instances.
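For example, with hypothetical hardware of 64 GB of memory and two TiKV instances on one machine, each instance would get 64 * 0.5 / 2 = 16 GB:

```yaml
storage:
  block-cache:
    capacity: "16GB"
```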
In the TiKV configuration, you need to configure the `tikv_status_port` port for the scenario of multiple instances on a single machine. Before you configure it, check whether a port conflict exists.

```ini
[tikv_servers]
TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv1"
TiKV1-2 ansible_host=172.16.10.4 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv1"
TiKV2-1 ansible_host=172.16.10.5 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv2"
TiKV2-2 ansible_host=172.16.10.5 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv2"
TiKV3-1 ansible_host=172.16.10.6 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv3"
TiKV3-2 ansible_host=172.16.10.6 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv3"
```
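One way to check for an existing port conflict before assigning `tikv_status_port`, using the example ports above:

```bash
# Run on each target host; no output means the ports are not in use.
ss -lnt | grep -E '20181|20182'
```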
Step 4: Download TiDB 3.0 binary to the Control Machine
Make sure that `tidb_version = v3.0.x` is set in the `tidb-ansible/inventory.ini` file, and then run the following command to download the TiDB 3.0 binary to the Control Machine:
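A sketch of the download step, assuming the standard `local_prepare.yml` playbook shipped with tidb-ansible:

```bash
# Run from the tidb-ansible directory on the Control Machine.
cd /home/tidb/tidb-ansible && ansible-playbook local_prepare.yml
```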
Step 5: Perform a rolling update to TiDB cluster components
- If the `process_supervision` variable uses the default `systemd` parameter, perform a rolling update to the TiDB cluster using the playbook that corresponds to your current TiDB cluster version, as shown in the sketch after this list:

    - When the TiDB cluster version < 3.0.0, use `excessive_rolling_update.yml`.

    - When the TiDB cluster version ≧ 3.0.0, use `rolling_update.yml` for both rolling updates and daily rolling restarts.
- If the `process_supervision` variable uses the `supervise` parameter, perform a rolling update to the TiDB cluster using `rolling_update.yml`, no matter what version the current TiDB cluster is.
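A sketch of the corresponding invocations, assuming the playbook names shipped with tidb-ansible release-3.0:

```bash
# Cluster version < 3.0.0 with process_supervision = systemd:
ansible-playbook excessive_rolling_update.yml

# Cluster version >= 3.0.0, or process_supervision = supervise:
ansible-playbook rolling_update.yml
```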
Step 6: Perform a rolling update to TiDB monitoring components