Release date: November 30, 2021
TiDB version: 5.3.0
In v5.3, the key new features or improvements are as follows:
- Introduce temporary tables to simplify your application logic and improve performance
- Support setting attributes for tables and partitions
- Support creating users with the least privileges on TiDB Dashboard to enhance system security
- Optimize the timestamp processing flow in TiDB to improve the overall performance
- Enhance the performance of TiDB Data Migration (DM) so that data is migrated from MySQL to TiDB with lower latency
- Support parallel import using multiple TiDB Lightning instances to improve the efficiency of full data migration
- Support saving and restoring the on-site information of a cluster with a single SQL statement, which helps improve the efficiency of troubleshooting issues relating to execution plans
- Support the continuous profiling experimental feature to improve the observability of database performance
- Continue optimizing the storage and computing engines to improve the system performance and stability
- Reduce the write latency of TiKV by separating I/O operations from Raftstore thread pool (disabled by default)
| Variable name | Change type | Description |
| :--- | :--- | :--- |
| | Modified | Temporary tables are now supported by TiDB so |
| | Newly added | Controls the behavior of the optimizer when the statistics on a table expire. The default value is |
| | Newly added | Determines whether to enable or disable the TSO Follower Proxy feature. The default value is |
| | Newly added | Sets the maximum waiting time for a batch saving operation when TiDB requests TSO from PD. The default value is |
| | Newly added | Limits the maximum size of a single temporary table. If the temporary table exceeds this size, an error occurs. |
| Configuration file | Configuration item | Change type | Description |
| :--- | :--- | :--- | :--- |
| TiDB | | Modified | Controls the number of cached statements. The default value is changed from |
| TiKV | | Modified | Controls the space reserved for disk protection when TiKV is started. Starting from v5.3.0, 80% of the reserved space is used as the extra disk space required for operations and maintenance when the disk space is insufficient, and the other 20% is used to store temporary files. |
| TiKV | | Modified | This configuration item is new in TiDB v5.3.0 and its value is calculated based on `storage.block-cache.capacity`. |
| TiKV | | Newly added | The allowable number of threads that process Raft I/O tasks, which is the size of the StoreWriter thread pool. When you modify the size of this thread pool, refer to Performance tuning for TiKV thread pools. |
| TiKV | | Newly added | Determines the threshold at which Raft data is written into the disk. If the data size is larger than the value of this configuration item, the data is written to the disk. When the value of |
| TiKV | | Newly added | Determines the interval at which Raft messages are sent in batches. The Raft messages in batches are sent at every interval specified by this configuration item. When the value of |
| TiKV | | Deleted | Determines the smallest duration that a Leader is transferred to a newly added node. |
| PD | | Modified | Controls the maximum number of days that logs are retained. The default value is changed from |
| PD | | Modified | Controls the maximum number of log files that are retained. The default value is changed from |
| PD | | Modified | Controls the frequency at which replicaChecker checks the health state of a Region. The smaller this value is, the faster replicaChecker runs. Normally, you do not need to adjust this parameter. The default value is changed from |
| PD | | Modified | Controls the maximum number of snapshots that a single store receives or sends at the same time. PD schedulers depend on this configuration to prevent the resources used for normal traffic from being preempted. The default value is changed from |
| PD | | Modified | Controls the maximum number of pending peers in a single store. PD schedulers depend on this configuration to prevent too many Regions with outdated logs from being generated on some nodes. The default value is changed from |
| TiDB Lightning | | Newly added | The schema name where the meta information for each TiDB Lightning instance is stored in the target cluster. The default value is "lightning_metadata". |
If you have created local temporary tables in a TiDB cluster earlier than v5.3.0, these tables are actually ordinary tables, and handled as ordinary tables after the cluster is upgraded to v5.3.0 or a later version. If you have created global temporary tables in a TiDB cluster of v5.3.0 or a later version, when the cluster is downgraded to a version earlier than v5.3.0, these tables are handled as ordinary tables and cause a data error.
Since v5.3.0, TiCDC and BR support global temporary tables. If you use TiCDC and BR of a version earlier than v5.3.0 to replicate global temporary tables to the downstream, a table definition error occurs.
The following clusters are expected to be v5.3.0 or later; otherwise, a data error is reported when you create a global temporary table:
- the cluster into which data is imported using TiDB migration tools
- the cluster restored using TiDB migration tools
- the downstream cluster in a replication task using TiDB migration tools
For the compatibility information of temporary tables, refer to Compatibility with MySQL temporary tables and Compatibility restrictions with other TiDB features.
For releases earlier than v5.3.0, TiDB reports an error when a system variable is set to an illegal value. For v5.3.0 and later releases, TiDB returns success with a warning such as "|Warning | 1292 | Truncated incorrect xxx: 'xx'" when a system variable is set to an illegal value.
- Fix the issue that the `SHOW VIEW` permission is not required to execute `SHOW CREATE VIEW`. Now you are expected to have the `SHOW VIEW` permission to execute the `SHOW CREATE VIEW` statement.
- The system variable `sql_auto_is_null` is added to the noop functions. When `tidb_enable_noop_functions = 0/OFF`, modifying this variable value causes an error.
- The `GRANT ALL ON performance_schema.*` syntax is no longer permitted. If you execute this statement in TiDB, an error occurs.
- Fix the issue that auto-analyze is unexpectedly triggered outside the specified time period when new indexes are added before v5.3.0. In v5.3.0, after you set the time period through the `tidb_auto_analyze_end_time` variable, auto-analyze is triggered only during this time period.
- The default storage directory for plugins is changed from
The DM code is migrated to the folder "dm" in the TiCDC code repository. DM now follows TiDB in version numbers. Following v2.0.x, the new DM version is v5.3.0, and you can upgrade from v2.0.x to v5.3.0 without any risk.
The default deployed version of Prometheus is upgraded from v2.8.1 to v2.27.1, released in May 2021. This version provides more features and fixes a security issue. Compared with Prometheus v2.8.1, the alert time representation in v2.27.1 is changed from Unix timestamp to UTC. For details, see the Prometheus commit.
Use SQL interface to set placement rules for data (experimental feature)
TiDB supports the `[CREATE | ALTER] PLACEMENT POLICY` syntax, which provides a SQL interface for setting placement rules for data. Using this feature, you can specify that tables and partitions be scheduled to specific regions, data centers, racks, or hosts, or configure replica count rules. This meets your application's demands for lower cost and higher flexibility. The typical user scenarios are as follows:
- Merge multiple databases of different applications to reduce the cost of database maintenance, and achieve application resource isolation through rule configuration
- Increase the replica count for important data to improve application availability and data reliability
- Store new data on SSDs and old data on HDDs to lower the cost of data archiving and storage
- Schedule the leaders of hotspot data to high-performance TiKV instances
- Separate cold data to lower-cost storage media to improve cost efficiency
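As a minimal sketch of this syntax (the policy name, table name, regions, and follower count below are illustrative assumptions, not values from this release note):

```sql
-- Create a placement policy pinning leaders to one region,
-- with followers spread across two regions.
CREATE PLACEMENT POLICY eastpolicy
    PRIMARY_REGION="us-east-1"
    REGIONS="us-east-1,us-west-1"
    FOLLOWERS=4;

-- Attach the policy to an existing table.
ALTER TABLE orders PLACEMENT POLICY=eastpolicy;
```

Because the feature is experimental in v5.3.0, verify the exact option names against the Placement Rules in SQL documentation before use.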
TiDB supports the `CREATE [GLOBAL] TEMPORARY TABLE` statement for creating temporary tables. Using this feature, you can easily manage the temporary data generated in the calculation process of an application. Temporary data is stored in memory, and you can use the `tidb_tmp_table_max_size` variable to limit the size of a temporary table. TiDB supports the following types of temporary tables:
Global temporary tables
- Visible to all sessions in the cluster, and table schemas are persistent.
- Provides transaction-level data isolation. The temporary data is effective only in the transaction. After the transaction finishes, the data is automatically dropped.
Local temporary tables
- Visible only to the current session, and table schemas are not persistent.
- Supports duplicated table names. You do not need to design complicated naming rules for your application.
- Provides session-level data isolation, which enables you to design a simpler application logic. After the session finishes, the temporary tables are dropped.
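The two table types above can be sketched as follows (table and column names are hypothetical; the size limit value is an example):

```sql
-- Global temporary table: schema is visible to all sessions,
-- data lives only within the current transaction.
CREATE GLOBAL TEMPORARY TABLE stage_global (
    id INT PRIMARY KEY,
    v  INT
) ON COMMIT DELETE ROWS;

-- Local temporary table: visible only to the current session.
CREATE TEMPORARY TABLE stage_local (
    id INT PRIMARY KEY,
    v  INT
);

-- Limit the size of a single temporary table (value in bytes).
SET SESSION tidb_tmp_table_max_size = 268435456;
```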
Support the `FOR UPDATE OF TABLES` syntax. For a SQL statement that joins multiple tables, TiDB supports acquiring pessimistic locks on the rows correlated to the tables listed in `OF TABLES`.
TiDB supports the `ALTER TABLE [PARTITION] ATTRIBUTES` statement, which allows you to set attributes for a table or partition. Currently, TiDB only supports setting the `merge_option` attribute. By adding this attribute, you can explicitly control the Region merge behavior.
User scenarios: When you perform the `SPLIT TABLE` operation, if no data is inserted after a certain period of time (controlled by the PD parameter `split-merge-interval`), the empty Regions are automatically merged by default. In this case, you can set the table attribute to `merge_option=deny` to avoid the automatic merging of Regions.
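The scenario above might look like the following sketch (`t` and the split range are hypothetical):

```sql
-- Pre-split the table into Regions before a bulk load.
SPLIT TABLE t BETWEEN (0) AND (1000000) REGIONS 16;

-- Keep PD from merging the empty Regions back together.
ALTER TABLE t ATTRIBUTES 'merge_option=deny';

-- Restore the default merge behavior once the load is done.
ALTER TABLE t ATTRIBUTES 'merge_option=allow';
```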
Support creating users with the least privileges on TiDB Dashboard
The account system of TiDB Dashboard is consistent with that of TiDB SQL. Users accessing TiDB Dashboard are authenticated and authorized based on TiDB SQL users' privileges. Therefore, TiDB Dashboard requires only limited privileges, or even just read-only privileges. You can configure users to access TiDB Dashboard based on the principle of least privilege, thus avoiding access by high-privileged users.
It is recommended that you create a least-privileged SQL user to access and sign in to TiDB Dashboard. This avoids access by high-privileged users and improves security.
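A least-privileged Dashboard user might be created as in the sketch below. The user name is hypothetical, and the privilege list shown (`PROCESS`, `CONFIG`, `SHOW DATABASES`, and the dynamic `DASHBOARD_CLIENT` privilege) is an assumption to verify against the TiDB Dashboard documentation for your version:

```sql
-- Create a dedicated sign-in user for TiDB Dashboard.
CREATE USER 'dashboard_ro'@'%' IDENTIFIED BY '<password>';

-- Grant only the privileges Dashboard needs, nothing more.
GRANT PROCESS, CONFIG, SHOW DATABASES ON *.* TO 'dashboard_ro'@'%';
GRANT DASHBOARD_CLIENT ON *.* TO 'dashboard_ro'@'%';
```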
Optimize the timestamp processing flow of PD
TiDB optimizes its timestamp processing flow and reduces the timestamp processing load of PD by enabling TSO Follower Proxy and modifying the batch waiting time required when the PD client requests TSO in batches. This helps improve the overall scalability of the system.
Support enabling or disabling TSO Follower Proxy through the system variable `tidb_enable_tso_follower_proxy`. When the TSO request load on PD is too high, enabling TSO Follower Proxy allows the followers to batch forward the TSO requests collected during the request cycle to the leader node. This solution effectively reduces the number of direct interactions between clients and the leader, reduces the load pressure on the leader, and improves the overall performance of TiDB.
Support using the system variable `tidb_tso_client_batch_max_wait_time` to set the maximum waiting time for the PD client to batch TSO requests. The unit of this time is milliseconds. When PD has a high TSO request load, you can reduce the load and improve throughput by increasing this waiting time to get a larger batch size.
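Both knobs are plain system variables, so tuning them is a sketch like the following (the 2 ms wait is an illustrative value, not a recommendation from this release note):

```sql
-- Let followers proxy TSO requests to the PD leader.
SET GLOBAL tidb_enable_tso_follower_proxy = ON;

-- Wait up to 2 ms so the PD client can form larger TSO batches.
SET GLOBAL tidb_tso_client_batch_max_wait_time = 2;
```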
Support Online Unsafe Recovery after some stores are permanently damaged (experimental feature)
PD introduces the `pd-ctl unsafe remove-failed-stores` command, which performs online unsafe data recovery. Suppose that the majority of data replicas encounter issues such as permanent damage (for example, disk damage), and these issues cause data ranges in an application to be unreadable or unwritable. In this case, you can use the Online Unsafe Recovery feature implemented in PD to recover the data, so that the data is readable and writable again.
It is recommended to perform the feature-related operations with the support of the TiDB team.
DM replication performance enhanced
Supports the following features to ensure lower-latency data replication from MySQL to TiDB:
- Compact multiple updates on a single row into one statement
- Merge batch updates of multiple rows into one statement
Add DM OpenAPI to better maintain DM clusters (experimental feature)
DM provides the OpenAPI feature for querying and operating the DM cluster, which is similar to the functionality of the dmctl tool.
Currently, DM OpenAPI is an experimental feature and disabled by default. It is not recommended to use it in a production environment.
TiDB Lightning Parallel Import
TiDB Lightning provides the parallel import capability, which extends its original functionality. It allows you to deploy multiple TiDB Lightning instances at the same time to import single or multiple tables into a downstream TiDB cluster in parallel. Without changing how it is used, this greatly improves the data migration capability, allowing you to migrate data in a more timely way for further processing, integration, and analysis, and improves the efficiency of enterprise data management.
In our test, using 10 TiDB Lightning instances, a total of 20 TiB of MySQL data was imported to TiDB within 8 hours. The performance of multiple-table import is also improved: a single TiDB Lightning instance can import at 250 GiB/h, and the overall migration is 8 times faster than the original performance.
TiDB Lightning Prechecks
TiDB Lightning provides the ability to check the configuration before running a migration task. It is enabled by default. This feature automatically performs some routine checks for disk space and execution configuration. The main purpose is to ensure that the whole subsequent import process goes smoothly.
TiDB Lightning supports importing files of GBK character set
You can specify the character set of the source data file. TiDB Lightning will convert the source file from the specified character set to UTF-8 encoding during the import process.
- Improve the comparison speed from 375 MB/s to 700 MB/s
- Reduce the memory consumption of TiDB nodes by nearly half during comparison
- Optimize the user interface and display the progress bar during comparison
Save and restore the on-site information of a cluster
When you locate and troubleshoot issues in a TiDB cluster, you often need to provide information about the system and the query plan. To help you get this information and troubleshoot cluster issues in a more convenient and efficient way, the `PLAN REPLAYER` command is introduced in TiDB v5.3.0. This command enables you to easily save and restore the on-site information of a cluster, improves the efficiency of troubleshooting, and helps you more easily archive issues for management.
The features of `PLAN REPLAYER` are as follows:
- Exports the information of a TiDB cluster during on-site troubleshooting to a ZIP-formatted file for storage.
- Imports into a cluster the ZIP-formatted file exported from another TiDB cluster. This file contains the information of the latter TiDB cluster during on-site troubleshooting.
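A typical round trip looks like the sketch below (the query, table `t`, and the file name are hypothetical; the actual dump file name is generated by TiDB):

```sql
-- On the cluster being diagnosed: export the on-site information
-- (schema, statistics, variables, plan) for one query to a ZIP file.
PLAN REPLAYER DUMP EXPLAIN SELECT * FROM t WHERE a < 10;

-- On another cluster: restore the exported information to reproduce
-- the same execution plan locally.
PLAN REPLAYER LOAD 'plan_replayer_dump.zip';
```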
TiCDC Eventually Consistent Replication
TiCDC provides the eventually consistent replication capability in disaster scenarios. When a disaster occurs in the primary TiDB cluster and the service cannot be resumed in a short period of time, TiCDC needs to provide the ability to ensure the consistency of data in the secondary cluster. Meanwhile, TiCDC needs to allow the business to quickly switch the traffic to the secondary cluster to avoid the database being unavailable for a long time and affecting the business.
This feature enables TiCDC to replicate incremental data from a TiDB cluster to a secondary relational database (TiDB/Aurora/MySQL/MariaDB). If the primary cluster crashes, TiCDC can recover the secondary cluster to a certain snapshot of the primary cluster within 5 minutes, provided that the replication status of TiCDC was normal and the replication lag was small before the disaster. It allows data loss of less than 30 minutes, that is, RTO <= 5 min and RPO <= 30 min.
TiCDC supports the HTTP protocol OpenAPI for managing TiCDC tasks
Since TiDB v5.3.0, TiCDC OpenAPI becomes a General Availability (GA) feature. You can query and operate TiCDC clusters using OpenAPI in the production environment.
Continuous Profiling (experimental feature)
TiDB Dashboard supports the Continuous Profiling feature, which stores instance performance analysis results automatically in real time when TiDB clusters are running. You can check the performance analysis result in a flame graph, which is more observable and shortens troubleshooting time.
This feature is disabled by default and needs to be enabled on the Continuous Profile page of TiDB Dashboard.
This feature is only available for clusters upgraded or installed using TiUP v1.7.0 or above.
TiDB adds the information to the telemetry report about whether or not the TEMPORARY TABLE feature is used. This does not include table names or table data.
To learn more about telemetry and how to disable this behavior, refer to Telemetry.
Starting from TiCDC v5.3.0, the cyclic replication feature between TiDB clusters (an experimental feature in v5.0.0) has been removed. If you have already used this feature to replicate data before upgrading TiCDC, the related data is not affected after the upgrade.
- Show the affected SQL statements in the debug log when the coprocessor encounters a lock, which is helpful in diagnosing problems #27718
- Support showing the size of the backup and restore data when backing up and restoring data in the SQL logical layer #27247
- Improve the default collection logic of `ANALYZE` when the statistics collection version is 2, which accelerates collection and reduces resource overhead
- Introduce the `ANALYZE TABLE table_name COLUMNS col_1, col_2, ..., col_n` syntax. The syntax allows collecting statistics only on a portion of the columns in wide tables, which improves the speed of statistics collection
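For example, using the syntax stated above on a wide table (table and column names here are hypothetical):

```sql
-- Collect statistics only for the two columns the optimizer
-- actually filters on, instead of all columns of the wide table.
ANALYZE TABLE orders COLUMNS customer_id, created_at;
```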
Enhance disk space protection to improve storage stability
To solve the issue that TiKV might panic in case of a disk fully-written error, TiKV introduces a two-level threshold defense mechanism to protect the remaining disk space from being exhausted by excess traffic. Additionally, the mechanism provides the ability to reclaim space when the threshold is triggered. When the remaining space threshold is triggered, some write operations fail and TiKV returns a disk full error as well as a list of disk full nodes. In this case, to recover the space and restore the service, you can execute `DROP TABLE`/`TRUNCATE TABLE` or scale out the nodes.
- Simplify the algorithm of L0 flow control #10879
- Improve the error log report in the Raft client module #10944
- Improve logging threads to avoid them becoming a performance bottleneck #10841
- Add more statistics types of write queries #10507
- Add more types of write queries to QPS dimensions in the hotspot scheduler #3869
- Support dynamically adjusting the retry limit of the Balance Region scheduler to improve the performance of the scheduler #3744
- Update TiDB Dashboard to v2021.10.08.1 #4070
- Support that the evict leader scheduler can schedule Regions with unhealthy peers #4093
- Speed up the exit process of schedulers #4146
- Greatly improve the execution efficiency of the TableScan operator
- Improve the execution efficiency of the Exchange operator
- Reduce write amplification and memory usage during GC of the storage engine (experimental feature)
- Improve the stability and availability of TiFlash when it restarts, which reduces possible query failures following the restart
- Support pushing down multiple new String and Time functions to the MPP engine
- String functions: LIKE pattern, FORMAT(), LOWER(), LTRIM(), RTRIM(), SUBSTRING_INDEX(), TRIM(), UCASE(), UPPER()
- Mathematical functions: ROUND (decimal, int)
- Date and time functions: HOUR(), MICROSECOND(), MINUTE(), SECOND(), SYSDATE()
- Type conversion function: CAST(time, real)
- Aggregation functions: GROUP_CONCAT(), SUM(enum)
- Support 512-bit SIMD
- Enhance the cleanup algorithm for outdated data to reduce disk usage and read files more efficiently
- Fix the issue that the dashboard does not display memory or CPU information on some non-Linux systems
- Unify the naming style of TiFlash log files (keeping it consistent with that of TiKV) and support dynamic modification of `logger.count` and `logger.size`
- Improve the data validation capability of column-based files (checksums, experimental feature)
- Reduce the default value of the Kafka sink configuration item `MaxMessageBytes` from 64 MB to 1 MB to fix the issue that large messages are rejected by the Kafka Broker #3104
- Reduce memory usage in the replication pipeline #2553 #3037 #2726
- Optimize monitoring items and alert rules to improve observability of synchronous links, memory GC, and stock data scanning processes #2735 #1606 #3000 #2985 #2156
- When the sync task status is normal, no more historical error messages are displayed to avoid misleading users #2242
- Fix an execution error caused by a wrong execution plan, which results from the shallow copy of schema columns when aggregation operators are pushed down on partitioned tables #27797 #26554
- Fix the issue that `plan cache` cannot detect changes of unsigned flags #28254
- Fix the wrong partition pruning when the partition function is out of range #28233
- Fix the issue that the planner might cache invalid plans for `join` in some cases #28087
- Fix wrong `IndexLookUpJoin` when the hash column type is
- Fix a batch client bug that recycling idle connection might block sending requests in some rare cases #27688
- Fix the TiDB Lightning panic issue when it fails to perform checksum on a target cluster #27686
- Fix wrong results of the `date_sub` function in some cases #27232
- Fix wrong results of the `hour` function in vectorized expressions #28643
- Fix the authentication issue when connecting to MySQL 5.1 or an older client version #27855
- Fix the issue that auto analyze might be triggered out of the specified time when a new index is added #28698
- Fix a bug that setting any session variable invalidates
- Fix a bug that BR is not working for clusters with many missing-peer Regions #27534
- Fix the unexpected error like `tidb_cast to Int32 is not supported` when the unsupported `cast` is pushed down to TiFlash #23907
- Fix the issue that `DECIMAL overflow` is missing in the `%s value is out of range in '%s'` error message #27964
- Fix a bug that the availability detection of MPP node does not work in some corner cases #3118
- Fix the `DATA RACE` issue when assigning `MPP task ID` #27952
- Fix the `INDEX OUT OF RANGE` error for an MPP query after deleting an empty
- Fix the issue of the false positive error log `invalid cop task execution summaries length` for MPP queries #1791
- Fix the issue of the error log `cannot found column in Schema column` for MPP queries #28149
- Fix the issue that TiDB might panic when TiFlash is shutting down #28096
- Remove the support for insecure 3DES (Triple Data Encryption Algorithm) based TLS cipher suites #27859
- Fix the issue that Lightning connects to offline TiKV nodes during pre-check and causes import failures #27826
- Fix the issue that the pre-check takes too much time when importing many files into tables #27605
- Fix the issue that rewriting expressions makes `between` infer the wrong collation #27146
- Fix the issue that the `group_concat` function did not consider the collation #27429
- Fix the wrong result that occurs when the argument of the `extract` function is a negative duration #27236
- Fix the issue that creating a partition fails if `NO_UNSIGNED_SUBTRACTION` is set #26765
- Avoid expressions with side effects in column pruning and aggregation pushdown #27106
- Remove useless gRPC logs #24190
- Limit the valid decimal length to fix precision-related issues #3091
- Fix the issue of a wrong way to check for overflow in
- Fix the issue of the `data too long` error when dumping statistics from a table with `new collation` data #27024
- Fix the issue that the retried transactions' statements are not included in
- Fix the wrong default value of the
- Fix the issue that `NULL` is returned when a named timezone and a UTC offset are both given #8311
- Fix the issue that `CREATE SCHEMA` does not use the character set specified by `collation_server` for new schemas if none is provided as part of the statement #27214
- Fix the issue of unavailable TiKV caused by Raftstore deadlock when migrating Regions. The workaround is to disable the scheduling and restart the unavailable TiKV #10909
- Fix the issue that CDC adds scan retries frequently due to the Congest error #11082
- Fix the issue that the Raft connection is broken when the channel is full #11047
- Fix the issue that batch messages are too large in Raft client implementation #9714
- Fix the issue that some coroutines leak in
- Fix a panic issue that occurs to the coprocessor when the size of response exceeds 4 GiB #9012
- Fix the issue that snapshot Garbage Collection (GC) misses GC snapshot files when snapshot files cannot be garbage collected #10813
- Fix a panic issue caused by timeout when processing Coprocessor requests #10852
- Fix a memory leak caused by monitoring data of statistics threads #11195
- Fix a panic issue caused by getting the cgroup information from some platforms #10980
- Fix the issue of poor scan performance because MVCC Deletion versions are not dropped by compaction filter GC #11248
- Fix the issue that PD incorrectly deletes peers that have data and are in pending status because the number of peers exceeds the number of configured peers #4045
- Fix the issue that PD does not fix down peers in time #4077
- Fix the issue that the scatter range scheduler cannot schedule empty Regions #4118
- Fix the issue that the key manager consumes too much CPU #4071
- Fix the data race issue that might occur when setting configurations of hot Region scheduler #4159
- Fix slow leader election caused by a stuck Region syncer #3936
- Fix the issue of inaccurate TiFlash Store Size statistics
- Fix the issue that TiFlash fails to start up on some platforms due to the absence of library
- Block the infinite wait of `wait index` when the write pressure is heavy (a default timeout of 5 minutes is added), which prevents TiFlash from waiting too long for data replication before providing services
- Fix the slow and no result issues of the log search when the log volume is large
- Fix the issue that only the most recent logs can be searched when searching old historical logs
- Fix the possible wrong result when a new collation is enabled
- Fix the possible parsing errors when an SQL statement contains extremely long nested expressions
- Fix the possible `Block schema mismatch` error of the Exchange operator
- Fix the possible `Can't compare` error when comparing Decimal types
- Fix the `3rd arguments of function substringUTF8 must be constants` error of the
- Fix the issue that TiCDC replication task might terminate when the upstream TiDB instance unexpectedly exits #3061
- Fix the issue that TiCDC process might panic when TiKV sends duplicate requests to the same Region #2386
- Fix unnecessary CPU consumption when verifying downstream TiDB/MySQL availability #3073
- Fix the issue that the volume of Kafka messages generated by TiCDC is not constrained by
- Fix the issue that TiCDC sync task might pause when an error occurs during writing a Kafka message #2978
- Fix the issue that some partitioned tables without valid indexes might be ignored when `force-replicate` is enabled #2834
- Fix the issue that scanning stock data might fail due to TiKV performing GC when scanning stock data takes too long #2470
- Fix a possible panic issue when encoding some types of columns into Open Protocol format #2758
- Fix a possible panic issue when encoding some types of columns into Avro format #2648
- Fix the issue that when most tables are filtered out, checkpoint cannot be updated under some special load #1075