# Upgrade TiDB Using TiUP

This document applies to the following upgrade paths:

- Upgrade from TiDB 4.0 versions to TiDB 5.2 versions.
- Upgrade from TiDB 5.0 versions to TiDB 5.2 versions.
- Upgrade from TiDB 5.1 versions to TiDB 5.2 versions.
If the cluster to be upgraded is v3.1 or earlier (v3.0 or v2.1), upgrading directly to v5.2 or its patch versions is not supported. You need to upgrade the cluster to v4.0 first, and then to v5.2, as sketched below.
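The following is a minimal sketch of that two-hop path, assuming the cluster is already managed by TiUP; the version numbers are examples, so pick the latest patch release of each line:

```shell
# First hop: land on a 4.0 release (example version)
tiup cluster upgrade <cluster-name> v4.0.16

# Second hop: continue to the target 5.2 release (example version)
tiup cluster upgrade <cluster-name> v5.2.4
```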
## Upgrade caveat

- TiDB currently does not support version downgrade or rolling back to an earlier version after the upgrade.
- For a v4.0 cluster managed using TiDB Ansible, you need to import the cluster to TiUP (`tiup cluster`) for new management according to Upgrade TiDB Using TiUP (v4.0). Then you can upgrade the cluster to v5.2 or its patch versions according to this document.
- To update versions earlier than 3.0 to 5.2:
    1. Update this version to 3.0 using TiDB Ansible.
    2. Use TiUP (`tiup cluster`) to import the TiDB Ansible configuration.
    3. Update the 3.0 version to 4.0 according to Upgrade TiDB Using TiUP (v4.0).
    4. Upgrade the cluster to v5.2 according to this document.
- Upgrading the versions of TiDB Binlog, TiCDC, TiFlash, and other components is supported.
- For detailed compatibility changes of different versions, see the Release Notes of each version. Modify your cluster configuration according to the "Compatibility Changes" section of the corresponding release notes.

> **Note:**
>
> Do not execute any DDL statements during the upgrade; otherwise, undefined behavior might occur.
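As a pre-flight check, you can confirm that no DDL jobs are still in flight before starting. A minimal sketch, assuming a TiDB endpoint at `127.0.0.1:4000` (host, port, and user are placeholders for your own deployment):

```shell
# List recent DDL jobs; make sure none are still running before the upgrade
mysql -h 127.0.0.1 -P 4000 -u root -e "ADMIN SHOW DDL JOBS;"
```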
## Preparations

This section introduces the preparations needed before upgrading your TiDB cluster, including upgrading TiUP and the TiUP Cluster component.
### Step 1: Upgrade TiUP or TiUP offline mirror

Before upgrading your TiDB cluster, you first need to upgrade TiUP or the TiUP offline mirror.
#### Upgrade TiUP and TiUP Cluster

> **Note:**
>
> If the control machine of the cluster to upgrade cannot access `https://tiup-mirrors.pingcap.com`, skip this section and see "Upgrade TiUP offline mirror" below.

1. Upgrade the TiUP version. It is recommended that the TiUP version is `1.5.0` or later.

    ```shell
    tiup update --self
    tiup --version
    ```

2. Upgrade the TiUP Cluster version. It is recommended that the TiUP Cluster version is `1.5.0` or later.

    ```shell
    tiup update cluster
    tiup cluster --version
    ```
#### Upgrade TiUP offline mirror

> **Note:**
>
> If the cluster to upgrade was not deployed using the offline method, skip this step.

Refer to Deploy a TiDB Cluster Using TiUP - Deploy TiUP offline to download the TiUP mirror of the new version and upload it to the control machine. After you execute `local_install.sh`, TiUP completes the overwrite upgrade.

```shell
tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz
sh tidb-community-server-${version}-linux-amd64/local_install.sh
source /home/tidb/.bash_profile
```

After the overwrite upgrade, execute the following command to upgrade the TiUP Cluster component:

```shell
tiup update cluster
```

Now the offline mirror has been upgraded successfully. If an error occurs during TiUP operation after the overwrite, it might be that the `manifest` is not updated. You can try `rm -rf ~/.tiup/manifests/*` before running TiUP again.
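As an optional sanity check after the mirror upgrade, you can list the TiDB versions the mirror now serves; the version you plan to upgrade to should appear in the output (a sketch using the standard `tiup list` sub-command):

```shell
# List the versions of the tidb component available from the current mirror
tiup list tidb
```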
### Step 2: Edit TiUP topology configuration file

Skip this step if one of the following situations applies:

- You have not modified the configuration parameters of the original cluster, or you have modified the configuration parameters using `tiup cluster` but no further modification is needed.
- After the upgrade, you want to use v5.2's default parameter values for the unmodified configuration items.

1. Enter the `vi` editing mode to edit the topology file:

    ```shell
    tiup cluster edit-config <cluster-name>
    ```

2. Refer to the format of the topology configuration template and fill the parameters you want to modify in the `server_configs` section of the topology file.

3. After the modification, enter : + w + q to save the change and exit the editing mode. Enter Y to confirm the change.
> **Note:**
>
> Before you upgrade the cluster to v5.2, make sure that the parameters you have modified in v4.0 are compatible in v5.2. For details, see TiKV Configuration File.
>
> The following three TiKV parameters are obsolete in TiDB v5.2. If they have been configured in your original cluster, you need to delete them through `edit-config`:
>
> - `pessimistic-txn.enabled`
> - `server.request-batch-enable-cross-command`
> - `server.request-batch-wait-duration`
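For illustration, the obsolete entries would appear under the `server_configs` section of the topology file opened by `edit-config`. The fragment below is hypothetical (the values are made up; only the keys matter):

```yaml
# Hypothetical fragment of the output of `tiup cluster edit-config`.
# If any of these TiKV keys are present, delete them before upgrading to v5.2.
server_configs:
  tikv:
    pessimistic-txn.enabled: true                    # obsolete in v5.2 - delete
    server.request-batch-enable-cross-command: true  # obsolete in v5.2 - delete
    server.request-batch-wait-duration: "1ms"        # obsolete in v5.2 - delete
```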
### Step 3: Check the health status of the current cluster

To avoid undefined behavior or other issues during the upgrade, it is recommended to check the health status of Regions in the current cluster before the upgrade. To do that, use the `check` sub-command:

```shell
tiup cluster check <cluster-name> --cluster
```

After the command is executed, the "Region status" check result is output:

- If the result is "All Regions are healthy", all Regions in the current cluster are healthy and you can continue the upgrade.
- If the result is "Regions are not fully healthy: m miss-peer, n pending-peer" with the prompt "Please fix unhealthy regions before other operations.", some Regions in the current cluster are abnormal. You need to troubleshoot the anomalies until the check result becomes "All Regions are healthy". Then you can continue the upgrade.
## Perform a rolling upgrade to the TiDB cluster

This section describes how to perform a rolling upgrade to the TiDB cluster and how to verify the version after the upgrade.

### Upgrade the TiDB cluster to a specified version

You can upgrade your cluster in one of two ways: online upgrade and offline upgrade.

By default, TiUP Cluster upgrades the TiDB cluster using the online method, which means that the TiDB cluster can still provide services during the upgrade process. With the online method, the leaders are migrated one by one on each node before the upgrade and restart. Therefore, for a large-scale cluster, it takes a long time to complete the entire upgrade operation.

If your application has a maintenance window during which the database can be stopped, you can use the offline upgrade method to perform the upgrade quickly.
#### Online upgrade

```shell
tiup cluster upgrade <cluster-name> <version>
```

For example, if you want to upgrade the cluster to v5.2.4:

```shell
tiup cluster upgrade <cluster-name> v5.2.4
```

> **Note:**
>
> - Performing a rolling upgrade to the cluster upgrades all components one by one. During the upgrade of TiKV, all leaders in a TiKV instance are evicted before the instance is stopped. The default timeout is 5 minutes (300 seconds); the instance is stopped directly after this timeout.
> - To perform the upgrade immediately without evicting the leaders, specify `--force` in the command above. This method causes performance jitter but no data loss.
> - To keep performance stable, make sure that all leaders in a TiKV instance are evicted before the instance is stopped. You can set `--transfer-timeout` to a larger value, for example, `--transfer-timeout 3600` (unit: second).
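For example, a large cluster might be upgraded with a one-hour eviction window per TiKV instance (the timeout value is illustrative):

```shell
# Allow up to one hour of leader eviction per TiKV instance before it is stopped
tiup cluster upgrade <cluster-name> v5.2.4 --transfer-timeout 3600
```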
#### Offline upgrade

1. Before the offline upgrade, you first need to stop the entire cluster:

    ```shell
    tiup cluster stop <cluster-name>
    ```

2. Use the `upgrade` command with the `--offline` option to perform the offline upgrade:

    ```shell
    tiup cluster upgrade <cluster-name> <version> --offline
    ```

3. After the upgrade, the cluster is not automatically restarted. You need to use the `start` command to restart it:

    ```shell
    tiup cluster start <cluster-name>
    ```
### Verify the cluster version

Execute the `display` command to view the latest cluster version `TiDB Version`:

```shell
tiup cluster display <cluster-name>
```

```
Cluster type:       tidb
Cluster name:       <cluster-name>
Cluster version:    v5.2.4
```

> **Note:**
>
> By default, TiUP and TiDB share usage details with PingCAP to help understand how to improve the product. For details about what is shared and how to disable the sharing, see Telemetry.
## FAQ

This section describes common problems encountered when you upgrade the TiDB cluster using TiUP.

### If an error occurs and the upgrade is interrupted, how to resume the upgrade after fixing this error?

Re-execute the `tiup cluster upgrade` command to resume the upgrade. The upgrade operation restarts the nodes that have already been upgraded. If you do not want the upgraded nodes to be restarted, use the `replay` sub-command to retry the operation:

1. Execute `tiup cluster audit` to see the operation records:

    ```shell
    tiup cluster audit
    ```

2. Find the failed upgrade operation record and keep the ID of this operation record. The ID is the `<audit-id>` value in the next step.

3. Execute `tiup cluster replay <audit-id>` to retry the corresponding operation:

    ```shell
    tiup cluster replay <audit-id>
    ```
### The evict leader step has waited too long during the upgrade. How to skip this step for a quick upgrade?

You can specify `--force`. Then the processes of transferring the PD leader and evicting the TiKV leaders are skipped during the upgrade. The cluster is directly restarted to update the version, which has a great impact on a cluster that runs online. Here is the command:

```shell
tiup cluster upgrade <cluster-name> <version> --force
```
### How to update the version of tools such as pd-ctl after upgrading the TiDB cluster?

You can upgrade the tool version by using TiUP to install the `ctl` component of the corresponding version:

```shell
tiup install ctl:v5.2.4
```
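After installation, the tools are invoked through TiUP. A quick sanity check, assuming one of your PD endpoints is reachable at the address shown (a placeholder):

```shell
# Query the store list through the freshly installed pd-ctl
tiup ctl:v5.2.4 pd -u http://127.0.0.1:2379 store
```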
## TiDB 5.2 compatibility changes

- See TiDB 5.2 Release Notes for the compatibility changes.
- Try to avoid creating a new clustered index table when you apply rolling updates to the clusters using TiDB Binlog.