TiDB 6.6.0 Release Notes
Release date: February 20, 2023
TiDB version: 6.6.0-DMR
Quick access: Quick start
In v6.6.0-DMR, the key new features and improvements are as follows:
Feature details
Scalability
Support Partitioned Raft KV storage engine (experimental) #11515 #12842 @busyjay @tonyxuqqi @tabokie @bufferflies @5kbpers @SpadeA-Tang @nolouch
Before TiDB v6.6.0, TiKV's Raft-based storage engine used a single RocksDB instance to store the data of all Regions of the TiKV instance. To support larger clusters more stably, starting from TiDB v6.6.0, a new TiKV storage engine is introduced, which uses multiple RocksDB instances to store TiKV Region data, with the data of each Region stored independently in a separate RocksDB instance. The new engine can better control the number and level of files in each RocksDB instance, achieve physical isolation of data operations between Regions, and stably manage more data. You can see it as TiKV managing multiple RocksDB instances through partitioning, which is why the feature is named Partitioned-Raft-KV. The main advantages of this feature are better write performance, faster scaling, larger data volumes supported with the same hardware, and support for larger cluster scales.
Currently, this feature is experimental and not recommended for use in production environments.
For more information, see documentation.
Support the distributed parallel execution framework for DDL operations (experimental) #37125 @zimulala
In previous versions, only one TiDB instance in the entire TiDB cluster was allowed to handle schema change tasks as the DDL owner. To further improve DDL concurrency for DDL operations on large tables, TiDB v6.6.0 introduces the distributed parallel execution framework for DDL, through which all TiDB instances in the cluster can concurrently execute the `StateWriteReorganization` phase of the same task to speed up DDL execution. This feature is controlled by the system variable `tidb_ddl_distribute_reorg` and is currently only supported for `ADD INDEX` operations.
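As a rough sketch (the table and index names are hypothetical), enabling the framework and triggering an accelerated `ADD INDEX` might look like this:

```sql
-- Hypothetical example: enable the distributed DDL reorg framework,
-- then add an index; the StateWriteReorganization phase can then be
-- executed concurrently by all TiDB instances in the cluster.
SET GLOBAL tidb_ddl_distribute_reorg = ON;
ALTER TABLE t ADD INDEX idx_c1 (c1);
```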
Performance
Support a stable wake-up model for pessimistic lock queues #13298 @MyonKeminta
If an application encounters frequent single-point pessimistic lock conflicts, the existing wake-up mechanism cannot guarantee the time for transactions to acquire locks, which causes high long-tail latency and even lock acquisition timeouts. Starting from v6.6.0, you can enable a stable wake-up model for pessimistic locks by setting the system variable `tidb_pessimistic_txn_aggressive_locking` to `ON`. In this wake-up model, the wake-up sequence of a queue can be strictly controlled to avoid the waste of resources caused by invalid wake-ups. In scenarios with serious lock conflicts, the stable wake-up model can reduce long-tail latency and the P99 response time; tests indicate a 40%–60% reduction in tail latency.
For more information, see documentation.
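A minimal sketch of turning the stable wake-up model on for subsequent sessions:

```sql
-- Hypothetical example: enable the stable wake-up model
-- for pessimistic lock queues cluster-wide.
SET GLOBAL tidb_pessimistic_txn_aggressive_locking = ON;
```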
Batch aggregate data requests #39361 @cfzjywxk @you06
When TiDB sends a data request to TiKV, TiDB compiles the request into different sub-tasks according to the Region where the data is located, and each sub-task only processes the request of a single Region. When the data to be accessed is highly dispersed, even if the size of the data is not large, many sub-tasks will be generated, which in turn will generate many RPC requests and consume extra time. Starting from v6.6.0, TiDB supports partially merging data requests that are sent to the same TiKV instance, which reduces the number of sub-tasks and the overhead of RPC requests. In the case of high data dispersion and insufficient gRPC thread pool resources, batching requests can improve performance by more than 50%.
This feature is enabled by default. You can set the batch size of requests using the system variable `tidb_store_batch_size`.
Remove the limit on `LIMIT` clauses #40219 @fzzf678
Starting from v6.6.0, TiDB plan cache supports caching execution plans with a variable as the `LIMIT` parameter, such as `LIMIT ?` or `LIMIT 10, ?`. This feature allows more SQL statements to benefit from plan cache, thus improving execution efficiency. Currently, for security considerations, TiDB only caches execution plans whose `?` value is not greater than 10000.
For more information, see documentation.
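A minimal sketch (table name is hypothetical) of a prepared statement that can now benefit from plan cache:

```sql
-- Hypothetical example: a variable LIMIT no longer prevents
-- the execution plan from being cached.
PREPARE st FROM 'SELECT * FROM t ORDER BY a LIMIT ?';
SET @n = 10;
EXECUTE st USING @n;  -- eligible for plan cache while @n <= 10000
```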
TiFlash supports data exchange with compression #6620 @solotzg
To cooperate with multiple nodes for computing, the TiFlash engine needs to exchange data among different nodes. When the size of the data to be exchanged is very large, the performance of data exchange might affect the overall computing efficiency. In v6.6.0, the TiFlash engine introduces a compression mechanism to compress the data that needs to be exchanged when necessary, and then to perform the exchange, thereby improving the efficiency of data exchange.
For more information, see documentation.
TiFlash supports the Stale Read feature #4483 @hehechen
The Stale Read feature has been generally available (GA) since v5.1.1. It allows you to read historical data at a specific timestamp or within a specified time range. Stale Read can reduce read latency and improve query performance by reading data from local TiKV replicas directly. Before v6.6.0, TiFlash did not support Stale Read: even if a table had TiFlash replicas, Stale Read could only read its TiKV replicas.
Starting from v6.6.0, TiFlash supports the Stale Read feature. When you query the historical data of a table using the `AS OF TIMESTAMP` syntax or the `tidb_read_staleness` system variable, if the table has a TiFlash replica, the optimizer can now choose to read the corresponding data from the TiFlash replica, thus further improving query performance.
For more information, see documentation.
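A sketch of both access methods (table name is hypothetical); either read may now be served from a TiFlash replica if one exists:

```sql
-- Hypothetical example: read data as of 10 seconds ago.
SELECT COUNT(*) FROM t AS OF TIMESTAMP NOW() - INTERVAL 10 SECOND;

-- Or allow reads in this session to be up to 5 seconds stale.
SET SESSION tidb_read_staleness = -5;
```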
Support pushing down the `regexp_replace` string function to TiFlash #6115 @xzhangxian1008
Reliability
Support resource control based on resource groups (experimental) #38825 @nolouch @BornChanger @glorv @tiancaiamao @Connor1996 @JmPotato @hnes @CabinfeverB @HuSharp
Now you can create resource groups for a TiDB cluster, bind different database users to corresponding resource groups, and set quotas for each resource group according to actual needs. When the cluster resources are limited, all resources used by sessions in the same resource group will be limited to the quota. In this way, even if a resource group is over-consumed, the sessions in other resource groups are not affected. TiDB provides a built-in view of the actual usage of resources on Grafana dashboards, assisting you to allocate resources more rationally.
The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units.
With this feature, you can:
- Combine multiple small and medium-sized applications from different systems into a single TiDB cluster. When the workload of an application grows larger, it does not affect the normal operation of other applications. When the system workload is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources.
- Choose to combine all test environments into a single TiDB cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can always get the necessary resources.
In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs.
In v6.6.0, you need to enable both TiDB's global variable `tidb_enable_resource_control` and the TiKV configuration item `resource-control.enabled` to enable resource control. Currently, the supported quota method is based on the "Request Unit (RU)". RU is TiDB's unified abstraction unit for system resources such as CPU and IO.
For more information, see documentation.
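A minimal sketch (the group and user names are made up) of creating a resource group with an RU quota and binding a user to it:

```sql
-- Hypothetical example: enable resource control, create a group
-- with a 500 RU/s quota, and bind an existing user to it.
SET GLOBAL tidb_enable_resource_control = ON;
CREATE RESOURCE GROUP IF NOT EXISTS rg_app1 RU_PER_SEC = 500;
ALTER USER app1_user RESOURCE GROUP rg_app1;
```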
Binding historical execution plans is GA #39199 @fzzf678
In v6.5.0, TiDB extended the binding targets in the `CREATE [GLOBAL | SESSION] BINDING` statements and supports creating bindings according to historical execution plans. In v6.6.0, this feature becomes GA. The selection of execution plans is not limited to the current TiDB node: any historical execution plan generated by any TiDB node can be selected as the target of a SQL binding, which further improves the feature's usability.
For more information, see documentation.
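A sketch of the workflow: look up the digest of a historical plan in the statements summary, then bind it (the query text and the `<plan_digest>` placeholder are hypothetical):

```sql
-- Hypothetical example: find the plan digest of a past execution.
SELECT plan_digest FROM information_schema.statements_summary_history
WHERE query_sample_text LIKE 'SELECT * FROM t WHERE a%';

-- Bind that historical plan; substitute the digest found above.
CREATE GLOBAL BINDING FROM HISTORY USING PLAN DIGEST '<plan_digest>';
```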
Add several optimizer hints #39964 @Reminiscent
TiDB adds several optimizer hints in v6.6.0 to control the execution plan selection of `LIMIT` operations.
- `ORDER_INDEX()`: tells the optimizer to use the specified index and to keep the order of the index when reading data, generating plans similar to `Limit + IndexScan(keep order: true)`.
- `NO_ORDER_INDEX()`: tells the optimizer to use the specified index but not to keep the order of the index when reading data, generating plans similar to `TopN + IndexScan(keep order: false)`.
Continuously introducing optimizer hints provides users with more intervention methods, helps solve SQL performance issues, and improves the stability of overall performance.
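A sketch of the two hints (the table and index names are made up):

```sql
-- Hypothetical examples: steer the plan for a LIMIT query.
SELECT /*+ ORDER_INDEX(t, idx_a) */ *
FROM t WHERE a > 10 ORDER BY a LIMIT 10;

SELECT /*+ NO_ORDER_INDEX(t, idx_a) */ *
FROM t WHERE a > 10 ORDER BY a LIMIT 10;
```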
Support dynamically managing the resource usage of DDL operations (experimental) #38025 @hawkingrei
TiDB v6.6.0 introduces resource management for DDL operations to reduce the impact of DDL changes on online applications by automatically controlling the CPU usage of these operations. This feature is effective only after the DDL distributed parallel execution framework is enabled.
Availability
Support configuring `SURVIVAL_PREFERENCE` for placement rules in SQL #38605 @nolouch
`SURVIVAL_PREFERENCES` provides data survival preference settings to increase the disaster survivability of data. By specifying `SURVIVAL_PREFERENCE`, you can control the following:
- For TiDB clusters deployed across cloud regions, when a cloud region fails, the specified databases or tables can survive in another cloud region.
- For TiDB clusters deployed in a single cloud region, when an availability zone fails, the specified databases or tables can survive in another availability zone.
For more information, see documentation.
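A sketch of a placement policy using the new option (the policy name and region labels are made up):

```sql
-- Hypothetical example: prefer surviving a region failure first,
-- then a zone failure, for tables using this policy.
CREATE PLACEMENT POLICY multiregion
  PRIMARY_REGION="us-east-1"
  REGIONS="us-east-1,us-east-2,us-west-1"
  SURVIVAL_PREFERENCES="[region, zone]";
ALTER TABLE t PLACEMENT POLICY = multiregion;
```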
Support rolling back DDL operations via the `FLASHBACK CLUSTER TO TIMESTAMP` statement #14045 @Defined2014 @JmPotato
The `FLASHBACK CLUSTER TO TIMESTAMP` statement supports restoring the entire cluster to a specified point in time within the Garbage Collection (GC) lifetime. In TiDB v6.6.0, this feature adds support for rolling back DDL operations. It can be used to quickly undo a DML or DDL misoperation on a cluster, roll back a cluster within minutes, and roll back a cluster multiple times on the timeline to determine when specific data changes occurred.
For more information, see documentation.
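A minimal sketch (the timestamp is made up; it must fall within the GC lifetime):

```sql
-- Hypothetical example: roll the whole cluster, including DDL
-- changes, back to a past point in time.
FLASHBACK CLUSTER TO TIMESTAMP '2023-02-20 10:00:00';
```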
SQL
Support MySQL-compatible foreign key constraints (experimental) #18209 @crazycs520
TiDB v6.6.0 introduces the foreign key constraints feature, which is compatible with MySQL. This feature supports referencing within a table or between tables, constraints validation, and cascade operations. This feature helps to migrate applications to TiDB, maintain data consistency, improve data quality, and facilitate data modeling.
For more information, see documentation.
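A minimal sketch of the MySQL-style syntax (table and constraint names are made up):

```sql
-- Hypothetical example: a foreign key with cascading delete.
CREATE TABLE parent (id INT PRIMARY KEY);
CREATE TABLE child (
    id  INT PRIMARY KEY,
    pid INT,
    CONSTRAINT fk_child_parent FOREIGN KEY (pid)
        REFERENCES parent (id) ON DELETE CASCADE
);
```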
Support MySQL-compatible multi-valued indexes (experimental) #39592 @xiongjiwei @qw4990
TiDB introduces MySQL-compatible multi-valued indexes in v6.6.0. Filtering the values of an array in a JSON column is a common operation, but normal indexes cannot speed up such an operation. Creating a multi-valued index on an array can greatly improve filtering performance. If an array in a JSON column has a multi-valued index, you can use the index to filter retrieval conditions with the `MEMBER OF()`, `JSON_CONTAINS()`, and `JSON_OVERLAPS()` functions, thereby reducing I/O consumption and improving operation speed.
Introducing multi-valued indexes further enhances TiDB's support for the JSON data type and also improves TiDB's compatibility with MySQL 8.0.
For more information, see documentation.
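A sketch of the MySQL 8.0-style syntax (table, column, and index names are made up):

```sql
-- Hypothetical example: index the elements of a JSON array,
-- then filter with MEMBER OF so the multi-valued index can be used.
CREATE TABLE doc (
    id   INT PRIMARY KEY,
    data JSON,
    INDEX idx_tags ((CAST(data->'$.tags' AS UNSIGNED ARRAY)))
);
SELECT * FROM doc WHERE 1 MEMBER OF (data->'$.tags');
```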
DB operations
Support configuring read-only storage nodes for resource-consuming tasks @v01dstar
In production environments, some read-only operations might regularly consume a large number of resources and affect the performance of the entire cluster, such as backups and large-scale data reading and analysis. TiDB v6.6.0 supports configuring read-only storage nodes for resource-consuming read-only tasks to reduce the impact on the online application. Currently, TiDB, TiSpark, and BR support reading data from read-only storage nodes. You can configure read-only storage nodes by following the documented steps and specify where data is read through the system variable `tidb_replica_read`, the TiSpark configuration item `spark.tispark.replica_read`, or the BR command-line argument `--replica-read-label`, to ensure the stability of cluster performance.
For more information, see documentation.
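For the TiDB side, a minimal sketch — assuming the `learner` value of `tidb_replica_read` is the one that directs reads to read-only (learner) stores:

```sql
-- Hypothetical example: route this session's reads to
-- read-only learner stores.
SET SESSION tidb_replica_read = 'learner';
```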
Support dynamically modifying `store-io-pool-size` #13964 @LykxSassinator
The TiKV configuration item `raftstore.store-io-pool-size` specifies the allowable number of threads that process Raft I/O tasks, which can be adjusted when tuning TiKV performance. Before v6.6.0, this configuration item could not be modified dynamically. Starting from v6.6.0, you can modify this configuration without restarting the server, which enables more flexible performance tuning.
For more information, see documentation.
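One plausible way to change it online, assuming the item is exposed through TiDB's `SET CONFIG` statement:

```sql
-- Hypothetical example: adjust the Raft I/O thread pool
-- without restarting TiKV.
SET CONFIG tikv `raftstore.store-io-pool-size` = 2;
```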
Support specifying the SQL script executed upon TiDB cluster initialization #35624 @morgo
When you start a TiDB cluster for the first time, you can specify the SQL script to be executed by configuring the command-line parameter `--initialize-sql-file`. You can use this feature when you need to perform operations such as modifying the value of a system variable, creating a user, or granting privileges.
For more information, see documentation.
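A rough sketch (the file path, user name, and statements are made up):

```shell
# Hypothetical example: write an init script and pass it to
# tidb-server on first startup.
cat > init.sql <<'EOF'
CREATE USER 'app'@'%' IDENTIFIED BY 'secret';
GRANT ALL ON app_db.* TO 'app'@'%';
EOF
tidb-server --initialize-sql-file=init.sql
```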
TiDB Data Migration (DM) integrates with TiDB Lightning's physical import mode for up to a 10x performance boost for full migration (experimental) @lance6716
In v6.6.0, DM full migration capability integrates with physical import mode of TiDB Lightning, which enables DM to improve the performance of full data migration by up to 10 times, greatly reducing the migration time in large data volume scenarios.
Before v6.6.0, for large data volume scenarios, you were required to configure physical import tasks in TiDB Lightning separately for fast full data migration, and then use DM for incremental data migration, which was a complex configuration. Starting from v6.6.0, you can migrate large data volumes without the need to configure TiDB Lightning tasks; one DM task can accomplish the migration.
For more information, see documentation.
TiDB Lightning adds a new configuration parameter
"header-schema-match"to address the issue of mismatched column names between the source file and the target table @dsdashunIn v6.6.0, TiDB Lightning adds a new profile parameter
"header-schema-match". The default value istrue, which means the first row of the source CSV file is treated as the column name, and consistent with that in the target table. If the field name in the CSV table header does not match the column name of the target table, you can set this configuration tofalse. TiDB Lightning will ignore the error and continue to import the data in the order of the columns in the target table.For more information, see documentation.
TiDB Lightning supports enabling compressed transfers when sending key-value pairs to TiKV #41163 @sleepymole
Starting from v6.6.0, TiDB Lightning supports compressing locally encoded and sorted key-value pairs for network transfer when sending them to TiKV, thus reducing the amount of data transferred over the network and lowering the network bandwidth overhead. In earlier TiDB versions without this feature, TiDB Lightning required relatively high network bandwidth and incurred high traffic charges in the case of large data volumes.
This feature is disabled by default. To enable it, you can set the `compress-kv-pairs` configuration item of TiDB Lightning to `"gzip"` or `"gz"`.
For more information, see documentation.
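A sketch of the configuration — the placement under `[tikv-importer]` is an assumption:

```toml
# Hypothetical example: enable gzip compression for key-value
# pairs sent to TiKV in physical import mode.
[tikv-importer]
backend = "local"
compress-kv-pairs = "gzip"
```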
The TiKV-CDC tool is now GA and supports subscribing to data changes of RawKV #48 @zeminzhou @haojinming @pingyu
TiKV-CDC is a CDC (Change Data Capture) tool for TiKV clusters. TiKV and PD can constitute a KV database when used without TiDB, which is called RawKV. TiKV-CDC supports subscribing to data changes of RawKV and replicating them to a downstream TiKV cluster in real time, thus enabling cross-cluster replication of RawKV.
For more information, see documentation.
TiCDC supports scaling out a single table on Kafka changefeeds and distributing the changefeed to multiple TiCDC nodes (experimental) #7720 @overvenus
Before v6.6.0, when a table in the upstream receives a large amount of writes, the replication capability of this table cannot be scaled out, resulting in an increase in replication latency. Starting from TiCDC v6.6.0, the changefeed of an upstream table can be distributed to multiple TiCDC nodes in a Kafka sink, which means the replication capability of a single table is scaled out.
For more information, see documentation.
GORM adds TiDB integration tests. Now TiDB is the default database supported by GORM #6014 @Icemap
- In v1.4.6, the GORM MySQL driver adapts to the `AUTO_RANDOM` attribute of TiDB #104
- In v1.4.6, the GORM MySQL driver fixes the issue that when connecting to TiDB, the `Unique` attribute of the `Unique` field cannot be modified during `AutoMigrate` #105
- The GORM documentation mentions TiDB as the default database #638
For more information, see GORM documentation.
Observability
Support quickly creating SQL binding on TiDB Dashboard #781 @YiniXu9506
TiDB v6.6.0 supports creating SQL binding from statement history, which allows you to quickly bind a SQL statement to a specific plan on TiDB Dashboard.
By providing a user-friendly interface, this feature simplifies the process of binding plans in TiDB, reduces the operation complexity, and improves the efficiency and user experience of the plan binding process.
For more information, see documentation.
Add warning for caching execution plans @qw4990
When an execution plan cannot be cached, TiDB indicates the reason in a warning to make diagnostics easier. For example:
```sql
mysql> PREPARE st FROM 'SELECT * FROM t WHERE a<?';
Query OK, 0 rows affected (0.00 sec)

mysql> SET @a='1';
Query OK, 0 rows affected (0.00 sec)

mysql> EXECUTE st USING @a;
Empty set, 1 warning (0.01 sec)

mysql> SHOW WARNINGS;
+---------+------+----------------------------------------------+
| Level   | Code | Message                                      |
+---------+------+----------------------------------------------+
| Warning | 1105 | skip plan-cache: '1' may be converted to INT |
+---------+------+----------------------------------------------+
```

In the preceding example, the optimizer converts a non-INT type to an INT type, and the execution plan might change with the change of the parameter, so TiDB does not cache the plan.
For more information, see documentation.
Add a `Warnings` field to the slow query log #39893 @time-and-fate
TiDB v6.6.0 adds a `Warnings` field to the slow query log to help diagnose performance issues. This field records warnings generated during the execution of a slow query. You can also view the warnings on the slow query page of TiDB Dashboard.
For more information, see documentation.
Automatically capture the generation of SQL execution plans #38779 @Yisaer
In the process of troubleshooting execution plan issues, `PLAN REPLAYER` can help preserve the scene and improve the efficiency of diagnosis. However, in some scenarios, the generation of some execution plans cannot be reproduced freely, which makes the diagnosis work more difficult.
To address such issues, in TiDB v6.6.0, `PLAN REPLAYER` extends the capability of automatic capture. With the `PLAN REPLAYER CAPTURE` command, you can register the target SQL statement in advance and also specify the target execution plan at the same time. When TiDB detects the SQL statement or the execution plan that matches the registered target, it automatically generates and packages the `PLAN REPLAYER` information. When the execution plan is unstable, this feature can improve diagnostic efficiency.
To use this feature, set the value of `tidb_enable_plan_replayer_capture` to `ON`.
For more information, see documentation.
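A minimal sketch — the two digests are placeholders for values taken from the statements summary:

```sql
-- Hypothetical example: enable capture, then register a target
-- SQL digest and plan digest to capture automatically.
SET GLOBAL tidb_enable_plan_replayer_capture = ON;
PLAN REPLAYER CAPTURE '<sql_digest>' '<plan_digest>';
```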
Support persisting statements summary (experimental) #40812 @mornyx
Before v6.6.0, statements summary data was kept in memory and would be lost upon a TiDB server restart. Starting from v6.6.0, TiDB supports enabling statements summary persistence, which allows historical data to be written to disk on a regular basis. In the meantime, the results of queries on system tables are derived from disk instead of memory. After TiDB restarts, all historical data remains available.
For more information, see documentation.
Security
TiFlash supports automatic rotations of TLS certificates #5503 @ywqzzy
In v6.6.0, TiDB supports automatic rotations of TiFlash TLS certificates. For a TiDB cluster with encrypted data transmission between components enabled, when a TLS certificate of TiFlash expires and needs to be reissued with a new one, the new TiFlash TLS certificate can be automatically loaded without restarting the TiDB cluster. In addition, the rotation of a TLS certificate between components within a TiDB cluster does not affect the use of the TiDB cluster, which ensures high availability of the cluster.
For more information, see documentation.
TiDB Lightning supports accessing Amazon S3 data via AWS IAM role keys and session tokens #40750 @okJiang
Before v6.6.0, TiDB Lightning only supported accessing S3 data via an AWS IAM user's access keys (each access key consists of an access key ID and a secret access key), so you could not use a temporary session token to access S3 data. Starting from v6.6.0, TiDB Lightning also supports accessing S3 data via an AWS IAM role's access keys plus session tokens, to improve data security.
For more information, see documentation.
Telemetry
- Starting from February 20, 2023, the telemetry feature is disabled by default in new versions of TiDB and TiDB Dashboard (including v6.6.0). If you upgrade from a previous version that uses the default telemetry configuration, the telemetry feature is disabled after the upgrade. For the specific versions, see TiDB Release Timeline.
- Starting from v1.11.3, the telemetry feature is disabled by default in newly deployed TiUP. If you upgrade from a previous version of TiUP to v1.11.3 or a later version, the telemetry feature keeps the same status as before the upgrade.
Compatibility changes
MySQL compatibility
Support MySQL-compatible foreign key constraints (experimental) #18209 @crazycs520
For more information, see the SQL section in this document and documentation.
Support the MySQL-compatible multi-valued indexes (experimental) #39592 @xiongjiwei @qw4990
For more information, see the SQL section in this document and documentation.
System variables
Configuration file parameters
Others
- Support dynamically modifying `store-io-pool-size`. This facilitates more flexible TiKV performance tuning.
- Remove the limit on `LIMIT` clauses, thus improving the execution performance.
- Starting from v6.6.0, BR does not support restoring data to clusters earlier than v6.1.0.
- Starting from v6.6.0, TiDB no longer supports modifying column types on partitioned tables because of potential correctness issues.
Improvements
TiDB
- Improve the scheduling mechanism of TTL background cleaning tasks to allow the cleaning task of a single table to be split into several sub-tasks and scheduled to run on multiple TiDB nodes simultaneously #40361 @YangKeao
- Optimize the column name display of the result returned by running multi-statements after setting a non-default delimiter #39662 @mjonss
- Optimize the execution efficiency of statements after warning messages are generated #39702 @tiancaiamao
- Support distributed data backfill for `ADD INDEX` (experimental) #37119 @zimulala
- Support using `CURDATE()` as the default value of a column #38356 @CbcWestwolf
- `partial order prop push down` now supports the LIST-type partitioned tables #40273 @winoros
- Add error messages for conflicts between optimizer hints and execution plan bindings #40910 @Reminiscent
- Optimize the plan cache strategy to avoid non-optimal plans when using plan cache in some scenarios #40312 #40218 #40280 #41136 #40686 @qw4990
- Clear expired region cache regularly to avoid memory leak and performance degradation #40461 @sticnarf
- `MODIFY COLUMN` is not supported on partitioned tables #39915 @wjhuang2016
- Disable renaming of columns that partitioned tables depend on #40150 @mjonss
- Refine the error message reported when a column that a partitioned table depends on is deleted #38739 @jiyfhust
- Add a mechanism that `FLASHBACK CLUSTER` retries when it fails to check the `min-resolved-ts` #39836 @Defined2014
TiKV
- Optimize the default values of some parameters in partitioned-raft-kv mode: the default value of the TiKV configuration item `storage.block-cache.capacity` is adjusted from 45% to 30%, and the default value of `region-split-size` is adjusted from `96MiB` to `10GiB`. When using raft-kv mode and `enable-region-bucket` is `true`, `region-split-size` is adjusted to 1 GiB by default. #12842 @tonyxuqqi
- Support priority scheduling in Raftstore asynchronous writes #13730 @Connor1996
- Support starting TiKV on a CPU with less than 1 core #13586 #13752 #14017 @andreid-db
- Optimize the new detection mechanism of the Raftstore slow score and add `evict-slow-trend-scheduler` #14131 @innerr
- Force the block cache of RocksDB to be shared and no longer support setting the block cache separately according to CF #12936 @busyjay
PD
- Support managing the global memory threshold to alleviate the OOM problem (experimental) #5827 @hnes
- Add the GC Tuner to alleviate the GC pressure (experimental) #5827 @hnes
- Add the `evict-slow-trend-scheduler` scheduler to detect and schedule abnormal nodes #5808 @innerr
- Add the keyspace manager to manage keyspace #5293 @AmoebaProtozoa
TiFlash
- Support an independent MVCC bitmap filter that decouples the MVCC filtering operations in the TiFlash data scanning process, which provides the foundation for future optimization of the data scanning process #6296 @JinheLin
- Reduce the memory usage of TiFlash by up to 30% when there is no query #6589 @hongyunyan
Tools
Backup & Restore (BR)
TiCDC
TiDB Data Migration (DM)
Optimize DM alert rules and content #7376 @D3Hunter
Previously, alerts similar to "DM_XXX_process_exits_with_error" were raised whenever a related error occurred. However, some of these errors are caused by idle database connections and can recover after reconnecting. To reduce such alerts, DM divides errors into two types: automatically recoverable errors and unrecoverable errors:
- For an error that is automatically recoverable, DM reports the alert only if the error occurs more than 3 times within 2 minutes.
- For an error that is not automatically recoverable, DM maintains the original behavior and reports the alert immediately.
Optimize relay performance by adding the async/batch relay writer #4287 @GMHDBJD
TiDB Lightning
- Physical Import Mode supports keyspace #40531 @iosmanthus
- Support setting the maximum number of conflicts via `lightning.max-error` #40743 @dsdashun
- Support importing CSV data files with BOM headers #40744 @dsdashun
- Optimize the processing logic when encountering TiKV flow-limiting errors and try other available regions instead #40205 @lance6716
- Disable checking the table foreign keys during import #40027 @sleepymole
Dumpling
sync-diff-inspector
- Add a new parameter `skip-non-existing-table` to control whether to skip checking upstream and downstream data consistency when tables in the downstream do not exist in the upstream #692 @lichunzhu @liumengya94
Bug fixes
TiDB
- Fix the issue that a statistics collection task fails due to an incorrect `datetime` value #39336 @xuyifangreeneyes
- Fix the issue that `stats_meta` is not created following table creation #38189 @xuyifangreeneyes
- Fix frequent write conflicts in transactions when performing DDL data backfill #24427 @mjonss
- Fix the issue that sometimes an index cannot be created for an empty table using ingest mode #39641 @tangenta
- Fix the issue that `wait_ts` in the slow query log is the same for different SQL statements within the same transaction #39713 @TonsnakeLin
- Fix the issue that the `Assertion Failed` error is reported when adding a column during the process of deleting a row record #39570 @wjhuang2016
- Fix the issue that the `not a DDL owner` error is reported when modifying a column type #39643 @zimulala
- Fix the issue that no error is reported when inserting a row after exhaustion of the auto-increment values of the `AUTO_INCREMENT` column #38950 @Dousir9
- Fix the issue that the `Unknown column` error is reported when creating an expression index #39784 @Defined2014
- Fix the issue that data cannot be inserted into a renamed table when the generated expression includes the name of this table #39826 @Defined2014
- Fix the issue that the `INSERT ignore` statement cannot fill in default values when the column is write-only #40192 @YangKeao
- Fix the issue that resources are not released when disabling the resource management module #40546 @zimulala
- Fix the issue that TTL tasks cannot trigger statistics updates in time #40109 @YangKeao
- Fix the issue that unexpected data is read because TiDB improperly handles `NULL` values when constructing key ranges #40158 @tiancaiamao
- Fix the issue that invalid values are written to a table when the `MODIFY COLUMN` statement also changes the default value of a column #40164 @wjhuang2016
- Fix the issue that the adding index operation is inefficient due to invalid Region cache when there are many Regions in a table #38436 @tangenta
- Fix data race occurred in allocating auto-increment IDs #40584 @Dousir9
- Fix the issue that the implementation of the not operator in JSON is incompatible with the implementation in MySQL #40683 @YangKeao
- Fix the issue that concurrent view might cause DDL operations to be blocked #40352 @zeminzhou
- Fix data inconsistency caused by concurrently executing DDL statements to modify columns of partitioned tables #40620 @mjonss
- Fix the issue that "Malformed packet" is reported when using `caching_sha2_password` for authentication without specifying a password #40831 @dveeden
- Fix the issue that a TTL task fails if the primary key of the table contains an `ENUM` column #40456 @lcwangchao
- Fix the issue that some DDL operations blocked by MDL cannot be queried in `mysql.tidb_mdl_view` #40838 @YangKeao
- Fix the issue that data race might occur during DDL ingestion #40970 @tangenta
- Fix the issue that TTL tasks might delete some data incorrectly after the time zone changes #41043 @lcwangchao
- Fix the issue that `JSON_OBJECT` might report an error in some cases #39806 @YangKeao
- Fix the issue that TiDB might deadlock during initialization #40408 @Defined2014
- Fix the issue that the value of system variables might be incorrectly modified in some cases due to memory reuse #40979 @lcwangchao
- Fix the issue that data might be inconsistent with the index when a unique index is created in the ingest mode #40464 @tangenta
- Fix the issue that some truncate operations cannot be blocked by MDL when truncating the same table concurrently #40484 @wjhuang2016
- Fix the issue that the `SHOW PRIVILEGES` statement returns an incomplete privilege list #40591 @CbcWestwolf
- Fix the issue that TiDB panics when adding a unique index #40592 @tangenta
- Fix the issue that executing the `ADMIN RECOVER` statement might cause the index data to be corrupted #40430 @xiongjiwei
- Fix the issue that a query might fail when the queried table contains a `CAST` expression in the expression index #40130 @xiongjiwei
- Fix the issue that a unique index might still produce duplicate data in some cases #40217 @tangenta
- Fix the PD OOM issue when there is a large number of Regions but the table ID cannot be pushed down when querying some virtual tables using `Prepare` or `Execute` #39605 @djshow832
- Fix the issue that data race might occur when an index is added #40879 @tangenta
- Fix the `can't find proper physical plan` issue caused by virtual columns #41014 @AilinKid
- Fix the issue that TiDB cannot restart after global bindings are created for partition tables in dynamic trimming mode #40368 @Yisaer
- Fix the issue that `auto analyze` causes graceful shutdown to take a long time #40038 @xuyifangreeneyes
- Fix the panic of the TiDB server when the IndexMerge operator triggers memory limiting behaviors #41036 @guo-shaoge
- Fix the issue that the `SELECT * FROM table_name LIMIT 1` query on partitioned tables is slow #40741 @solotzg
- Fix the issue that a statistics collection task fails due to an incorrect
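
Among the TiDB fixes above, #40456 concerns TTL tasks on tables whose primary key contains an `ENUM` column. A minimal sketch of such a table (table and column names are illustrative; `TTL` table-option syntax as documented since TiDB v6.5):

```sql
-- Hypothetical table reproducing the fixed scenario: a TTL table
-- whose composite primary key includes an ENUM column.
CREATE TABLE ttl_events (
    kind ENUM('click', 'view') NOT NULL,
    id BIGINT NOT NULL,
    created_at TIMESTAMP NOT NULL,
    PRIMARY KEY (kind, id)
) TTL = `created_at` + INTERVAL 7 DAY;
```

Before this fix, background TTL jobs on such a table could fail; with it, expired rows are cleaned up as configured.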
TiKV
- Fix an error that occurs when casting the `const Enum` type to other types #14156 @wshwsh12
- Fix the issue that Resolved TS causes higher network traffic #14092 @overvenus
- Fix the data inconsistency issue caused by network failure between TiDB and TiKV during the execution of a DML after a failed pessimistic DML #14038 @MyonKeminta
PD
- Fix the issue that the Region Scatter task generates redundant replicas unexpectedly #5909 @HundunDM
- Fix the issue that the Online Unsafe Recovery feature would get stuck and time out in `auto-detect` mode #5753 @Connor1996
- Fix the issue that the execution of `replace-down-peer` slows down under certain conditions #5788 @HundunDM
- Fix the PD OOM issue that occurs when the calls of `ReportMinResolvedTS` are too frequent #5965 @HundunDM
TiFlash
- Fix the issue that querying TiFlash-related system tables might get stuck #6745 @lidezhu
- Fix the issue that semi-joins use excessive memory when calculating Cartesian products #6730 @gengliqi
- Fix the issue that the result of the division operation on the DECIMAL data type is not rounded #6393 @LittleFall
- Fix the issue that `start_ts` cannot uniquely identify an MPP query in TiFlash queries, which might cause an MPP query to be incorrectly canceled #43426 @hehechen
Tools
Backup & Restore (BR)
- Fix the issue that when restoring log backup, hot Regions cause the restore to fail #37207 @Leavrth
- Fix the issue that restoring data to a cluster on which the log backup is running causes the log backup file to be unrecoverable #40797 @Leavrth
- Fix the issue that the PITR feature does not support CA-bundles #38775 @YuJuncen
- Fix the panic issue caused by duplicate temporary tables during recovery #40797 @joccau
- Fix the issue that PITR does not support configuration changes for PD clusters #14165 @YuJuncen
- Fix the issue that the connection failure between PD and tidb-server causes PITR backup progress not to advance #41082 @YuJuncen
- Fix the issue that TiKV cannot listen to PITR tasks due to the connection failure between PD and TiKV #14159 @YuJuncen
- Fix the issue that the frequency of `resolve lock` is too high when there is no PITR backup task in the TiDB cluster #40759 @joccau
- Fix the issue that when a PITR backup task is deleted, the residual backup data causes data inconsistency in new tasks #40403 @joccau
TiCDC
- Fix the issue that `transaction_atomicity` and `protocol` cannot be updated via the configuration file #7935 @CharlesCheung96
- Fix the issue that precheck is not performed on the storage path of redo log #6335 @CharlesCheung96
- Fix the issue that the duration for which redo log can tolerate an S3 storage failure is insufficient #8089 @CharlesCheung96
- Fix the issue that changefeed might get stuck in special scenarios such as when scaling in or scaling out TiKV or TiCDC nodes #8174 @hicqu
- Fix the issue of too high traffic among TiKV nodes #14092 @overvenus
- Fix the performance issues of TiCDC in terms of CPU usage, memory control, and throughput when the pull-based sink is enabled #8142 #8157 #8001 #5928 @hicqu @hi-rustin
- Fix the issue that
TiDB Data Migration (DM)
- Fix the issue that the `binlog-schema delete` command fails to execute #7373 @liumengya94
- Fix the issue that the checkpoint does not advance when the last binlog is a skipped DDL #8175 @D3Hunter
- Fix a bug that when the expression filters of both "update" and "non-update" types are specified in one table, all `UPDATE` statements are skipped #7831 @lance6716
- Fix a bug that when only one of `update-old-value-expr` or `update-new-value-expr` is set for a table, the filter rule does not take effect or DM panics #7774 @lance6716
- Fix the issue that the
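
The DM expression-filter fixes (#7831, #7774) involve task configurations like the following sketch, where only `update-old-value-expr` is set for a table (rule, schema, table, and source names are hypothetical):

```yaml
# Illustrative fragment of a DM task file.
expression-filter:
  skip-closed-orders:           # hypothetical rule name
    schema: "app_db"
    table: "orders"
    # Only the "old value" side of an UPDATE is matched here; before
    # #7774, setting just one side could be ignored or panic DM.
    update-old-value-expr: "status = 0"

mysql-instances:
  - source-id: "mysql-01"
    expression-filters: ["skip-closed-orders"]
```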
TiDB Lightning
- Fix the issue that TiDB Lightning timeout hangs due to TiDB restart in some scenarios #33714 @lichunzhu
- Fix the issue that TiDB Lightning might incorrectly skip conflict resolution when all but the last TiDB Lightning instance encounters a local duplicate record during a parallel import #40923 @lichunzhu
- Fix the issue that precheck cannot accurately detect the presence of a running TiCDC in the target cluster #41040 @lance6716
- Fix the issue that TiDB Lightning panics in the split-region phase #40934 @lance6716
- Fix the issue that the conflict resolution logic (`duplicate-resolution`) might lead to inconsistent checksums #40657 @sleepymole
- Fix a possible OOM problem when there is an unclosed delimiter in the data file #40400 @buchuitoudegou
- Fix the issue that the file offset in the error report exceeds the file size #40034 @buchuitoudegou
- Fix an issue with the new version of PDClient that might cause parallel import to fail #40493 @AmoebaProtozoa
- Fix the issue that TiDB Lightning prechecks cannot find dirty data left by previously failed imports #39477 @dsdashun
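
Several of the TiDB Lightning fixes (#40657, #40923) relate to conflict resolution during physical-mode imports. A hedged sketch of the relevant `tidb-lightning.toml` fragment (values are illustrative):

```toml
# Illustrative tidb-lightning.toml fragment.
[tikv-importer]
backend = "local"
# Resolve duplicate records detected during import by removing them;
# #40657 fixed checksum inconsistencies on this code path.
duplicate-resolution = "remove"
```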
Contributors
We would like to thank the following contributors from the TiDB community: