TiDB 8.2.0 Release Notes
Release date: July 11, 2024
TiDB version: 8.2.0
8.2.0 introduces the following key features and improvements:
Category | Feature/Enhancement | Description |
---|---|---|
Reliability and Availability | TiProxy supports multiple load balancing policies | In TiDB v8.2.0, TiProxy evaluates and ranks TiDB nodes based on various dimensions, such as status, connection counts, health, memory, CPU, and location. According to the load balancing policy specified in the `policy` configuration item, TiProxy dynamically selects the optimal TiDB node to execute database operations. This optimizes overall resource usage, improves cluster performance, and increases throughput. |
Reliability and Availability | The parallel HashAgg algorithm of TiDB supports disk spill (GA) | HashAgg is a widely used aggregation operator in TiDB for efficiently aggregating rows with the same field values. TiDB v8.0.0 introduces parallel HashAgg as an experimental feature to further enhance processing speed. When memory resources are insufficient, parallel HashAgg spills temporary sorted data to disk, avoiding potential OOM risks caused by excessive memory usage. This improves query performance while maintaining node stability. In v8.2.0, this feature becomes generally available (GA) and is enabled by default, enabling you to safely configure the concurrency of parallel HashAgg using `tidb_executor_concurrency`. |
Reliability and Availability | Improve statistics loading efficiency by up to 10 times | For clusters with a large number of tables and partitions, such as SaaS or PaaS services, improvement in statistics loading efficiency can solve the problem of slow startup of TiDB instances, and increase the success rate of dynamic loading of statistics. This improvement reduces performance rollbacks caused by statistics loading failures and improves cluster stability. |
DB Operations and Observability | Introduce privilege control of switching resource groups | As resource control is widely used, the privilege control of switching resource groups can prevent database users from abusing resources, strengthen administrators' protection of overall resource usage, and improve cluster stability. |
Feature details
Performance
Support pushing down the following JSON functions to TiKV #50601 @dbsid
- `JSON_ARRAY_APPEND()`
- `JSON_MERGE_PATCH()`
- `JSON_REPLACE()`
For more information, see documentation.
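The following sketch shows these functions in queries; the table `t` and its JSON column `j` are hypothetical, and whether a given expression is actually pushed down still depends on the execution plan chosen by the optimizer.

```sql
-- Hypothetical table used only for illustration.
CREATE TABLE t (id INT PRIMARY KEY, j JSON);
INSERT INTO t VALUES (1, '{"tags": ["a"], "meta": {"x": 1}}');

-- These JSON functions can now be evaluated by the TiKV Coprocessor
-- instead of in TiDB:
SELECT JSON_ARRAY_APPEND(j, '$.tags', 'b') FROM t;
SELECT JSON_MERGE_PATCH(j, '{"meta": {"y": 2}}') FROM t;
SELECT JSON_REPLACE(j, '$.meta.x', 42) FROM t;
```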
TiDB supports parallel sorting #49217 #50746 @xzhangxian1008
Before v8.2.0, TiDB only executes Sort operators sequentially, affecting query performance when sorting large amounts of data.
Starting from v8.2.0, TiDB supports parallel sorting, which significantly improves sorting performance. This feature does not need manual configuration. TiDB automatically determines whether to use parallel sorting based on the value of the `tidb_executor_concurrency` system variable. For more information, see documentation.
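A minimal way to try this, assuming a hypothetical table `t` with columns `k` and `c`: raise `tidb_executor_concurrency` for the session and check the `Sort` operator in the output of `EXPLAIN ANALYZE`.

```sql
-- Sort concurrency follows tidb_executor_concurrency; 8 is only an example value.
SET SESSION tidb_executor_concurrency = 8;

-- Inspect the Sort operator in the execution plan of a large sort.
EXPLAIN ANALYZE SELECT * FROM t ORDER BY k, c;
```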
The parallel HashAgg algorithm of TiDB supports disk spill (GA) #35637 @xzhangxian1008
TiDB v8.0.0 introduces the parallel HashAgg algorithm with disk spill support as an experimental feature. In v8.2.0, this feature becomes generally available (GA). When using the parallel HashAgg algorithm, TiDB automatically triggers data spill based on memory usage, thus balancing query performance and data throughput. This feature is enabled by default. The system variable `tidb_enable_parallel_hashagg_spill`, which controls this feature, will be deprecated in a future release. For more information, see documentation.
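A hedged sketch of exercising the spill path: the table `orders` and the quota value are placeholders, and `tidb_mem_quota_query` is only lowered here so that a large aggregation hits the memory limit and spills instead of failing.

```sql
-- HashAgg concurrency is governed by tidb_executor_concurrency.
SET SESSION tidb_executor_concurrency = 8;
-- Per-query memory quota in bytes (100 MiB here, purely illustrative).
SET SESSION tidb_mem_quota_query = 104857600;

-- A large aggregation; if it exceeds the quota, parallel HashAgg spills to disk.
SELECT customer_id, COUNT(*), SUM(amount)
FROM orders
GROUP BY customer_id;
```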
Reliability
Improve statistics loading efficiency by up to 10 times #52831 @hawkingrei
SaaS or PaaS applications can have a large number of data tables, which not only slows down the loading of initial statistics but also increases the failure rate of synchronous loading under high load. This can affect the startup time of TiDB and the accuracy of execution plans. In v8.2.0, TiDB optimizes the process of loading statistics from multiple perspectives, such as the concurrency model and memory allocation, to reduce latency, improve throughput, and prevent slow statistics loading from affecting business scaling.
Adaptive concurrent loading is now supported. By default, the configuration item `stats-load-concurrency` is set to `0`, and the concurrency of statistics loading is automatically selected based on the hardware specification. For more information, see documentation.
Availability
TiProxy supports multiple load balancing policies #465 @djshow832 @xhebox
TiProxy is the official proxy component of TiDB, located between the client and the TiDB server. It provides load balancing and connection persistence functions for TiDB. Before v8.2.0, the default TiProxy version is v1.0.0, which only supports status-based and connection count-based load balancing policies for TiDB servers.
Starting from v8.2.0, the default TiProxy version is v1.1.0, which introduces multiple load balancing policies. In addition to status-based and connection count-based policies, TiProxy supports dynamic load balancing based on health, memory, CPU, and location, improving the stability of the TiDB cluster.
You can configure the combination and priority of load balancing policies through the `policy` configuration item.

- `resource`: the resource priority policy performs load balancing based on the following priority order: status, health, memory, CPU, location, and connection count.
- `location`: the location priority policy performs load balancing based on the following priority order: status, location, health, memory, CPU, and connection count.
- `connection`: the minimum connection count priority policy performs load balancing based on the following priority order: status and connection count.
For more information, see documentation.
SQL
TiDB supports the JSON schema validation function #52779 @dveeden
Before v8.2.0, you need to rely on external tools or customized validation logic for JSON data validation, which increases the complexity of development and maintenance, and reduces development efficiency. Starting from v8.2.0, the `JSON_SCHEMA_VALID()` function is introduced. Using `JSON_SCHEMA_VALID()` in the `CHECK` constraint can help prevent non-conforming data from being inserted, rather than checking the data after it has been added. This function lets you verify the validity of JSON data directly in TiDB, improving the integrity and consistency of the data, and increasing the development efficiency. For more information, see documentation.
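A minimal sketch of the pattern described above; the table definition and the JSON schema are illustrative only.

```sql
CREATE TABLE person (
    id   INT PRIMARY KEY,
    data JSON,
    CONSTRAINT data_matches_schema CHECK (
        JSON_SCHEMA_VALID(
            '{"type":"object","required":["name"],"properties":{"name":{"type":"string"}}}',
            data
        )
    )
);

INSERT INTO person VALUES (1, '{"name": "Alice"}');  -- accepted
INSERT INTO person VALUES (2, '{"age": 30}');        -- rejected: "name" is missing
```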
DB operations
TiUP supports deploying PD microservices #5766 @rleungx
Starting from v8.0.0, PD supports the microservice mode. This mode splits the timestamp allocation and cluster scheduling functions of PD into separate microservices that can be deployed independently, thereby improving resource control and isolation, and reducing the impact between different services. Before v8.2.0, PD microservices can only be deployed using TiDB Operator.
Starting from v8.2.0, PD microservices can also be deployed using TiUP. You can deploy the `tso` microservice and the `scheduling` microservice separately in a cluster to enhance PD performance scalability and address PD performance bottlenecks in large-scale clusters. It is recommended to use this mode when PD becomes a significant performance bottleneck that cannot be resolved by scaling up. For more information, see user documentation.
Add privilege control of switching resource groups #53440 @glorv
TiDB lets users switch to other resource groups using the `SET RESOURCE GROUP` command or the `RESOURCE_GROUP()` hint, which might lead to resource group abuse by some database users. TiDB v8.2.0 introduces privilege control of switching resource groups. Only database users granted the `RESOURCE_GROUP_ADMIN` or `RESOURCE_GROUP_USER` dynamic privilege can switch to other resource groups, enhancing the protection of system resources.

To maintain compatibility, the original behavior is retained when upgrading from earlier versions to v8.2.0 or later versions. To enable the enhanced privilege control, set the new variable `tidb_resource_control_strict_mode` to `ON`.

For more information, see user documentation.
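A minimal sketch of the workflow; the resource group, user name, and table are hypothetical.

```sql
-- Administrator: enable the enhanced privilege check and grant the dynamic privilege.
SET GLOBAL tidb_resource_control_strict_mode = ON;
CREATE RESOURCE GROUP IF NOT EXISTS rg_batch RU_PER_SEC = 500;
GRANT RESOURCE_GROUP_USER ON *.* TO 'batch_user'@'%';

-- batch_user: switching resource groups now requires the privilege granted above.
SET RESOURCE GROUP rg_batch;
SELECT /*+ RESOURCE_GROUP(rg_batch) */ COUNT(*) FROM orders;
```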
Observability
Record the reason why an execution plan is not cached #50618 @qw4990
In some scenarios, you might want to cache most execution plans to save execution overhead and reduce latency. Currently, execution plan caching has some limitations on SQL. Execution plans of some SQL statements cannot be cached. It is difficult to identify the SQL statements that cannot be cached and the corresponding reasons.
Therefore, starting from v8.2.0, new columns `PLAN_CACHE_UNQUALIFIED` and `PLAN_CACHE_UNQUALIFIED_LAST_REASON` are added to the system table `STATEMENTS_SUMMARY` to explain the reason why an execution plan cannot be cached, which can help you tune performance. For more information, see documentation.
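For example, a query like the following (a sketch, assuming `PLAN_CACHE_UNQUALIFIED` is a per-statement counter) surfaces the statements that most often miss the plan cache and the last recorded reason:

```sql
SELECT DIGEST_TEXT,
       PLAN_CACHE_UNQUALIFIED,
       PLAN_CACHE_UNQUALIFIED_LAST_REASON
FROM   INFORMATION_SCHEMA.STATEMENTS_SUMMARY
WHERE  PLAN_CACHE_UNQUALIFIED > 0
ORDER  BY PLAN_CACHE_UNQUALIFIED DESC
LIMIT  10;
```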
Security
Enhance TiFlash log desensitization #8977 @JaySon-Huang
TiDB v8.0.0 enhances the log desensitization feature, enabling you to control whether user data in TiDB logs is wrapped in markers `‹ ›`. Based on the marked logs, you can decide whether to redact the marked information when displaying logs, thereby increasing the flexibility of log desensitization. In v8.2.0, TiFlash introduces a similar enhancement for log desensitization. To use this feature, set the TiFlash configuration item `security.redact_info_log` to `marker`. For more information, see documentation.
Data migration
Align TiCDC Syncpoints across multiple changefeeds #11212 @hongyunyan
Before v8.2.0, aligning TiCDC Syncpoints across multiple changefeeds was challenging. The `startTs` of the changefeed had to be carefully selected when the changefeed was created, so it would align with the Syncpoints of other changefeeds. Starting from v8.2.0, Syncpoints for a changefeed are created as a multiple of the changefeed's `sync-point-interval` configuration. This change lets you align Syncpoints across multiple changefeeds that have the same `sync-point-interval` configuration, simplifying and improving the ability to align multiple downstream clusters. For more information, see documentation.
TiCDC Pulsar Sink supports using the `pulsar+http` and `pulsar+https` connection protocols #11336 @SandeepPadhi

Before v8.2.0, TiCDC Pulsar Sink only supports `pulsar` and `pulsar+ssl` connection protocols. Starting from v8.2.0, TiCDC Pulsar Sink also supports `pulsar+http` and `pulsar+https` protocols for connections. This enhancement improves the flexibility of connecting to Pulsar. For more information, see documentation.
Compatibility changes
Behavior changes
- When using TiDB Lightning to import a CSV file, if you set `strict-format = true` to split a large CSV file into multiple small CSV files to improve concurrency and import performance, you need to explicitly specify `terminator`. The values can be `\r`, `\n` or `\r\n`. Failure to specify a line terminator might result in an exception when parsing the CSV file data. #37338 @lance6716
- When using `IMPORT INTO` to import a CSV file, if you specify the `SPLIT_FILE` parameter to split a large CSV file into multiple small CSV files to improve concurrency and import performance, you need to explicitly specify the line terminator `LINES_TERMINATED_BY` (see the sketch after this list). The values can be `\r`, `\n` or `\r\n`. Failure to specify a line terminator might result in an exception when parsing the CSV file data. #37338 @lance6716
- Before BR v8.2.0, performing BR data restore on a cluster with TiCDC replication tasks is not supported. Starting from v8.2.0, BR relaxes the restrictions on data restoration for TiCDC: if the BackupTS (the backup time) of the data to be restored is earlier than the changefeed `CheckpointTS` (the timestamp that indicates the current replication progress), BR can proceed with the data restore normally. Considering that `BackupTS` is usually much earlier, it can be assumed that in most scenarios, BR supports restoring data for a cluster with TiCDC replication tasks. #53131 @YuJuncen
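A hedged sketch of the `IMPORT INTO` case above; the target table, file URI, and thread count are placeholders.

```sql
-- With SPLIT_FILE, the line terminator must be specified explicitly.
IMPORT INTO sbtest1
FROM 's3://my-bucket/big-file.csv'
WITH SPLIT_FILE, LINES_TERMINATED_BY = '\n', THREAD = 8;
```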
MySQL compatibility
- Before v8.2.0, executing the `CREATE USER` statement with the `PASSWORD REQUIRE CURRENT DEFAULT` option returns an error because this option is not supported and cannot be parsed. Starting from v8.2.0, TiDB supports parsing and ignoring this option for compatibility with MySQL. #53305 @dveeden
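For example, the following statement (user name and password are placeholders) now parses; TiDB accepts and ignores the `PASSWORD REQUIRE CURRENT DEFAULT` clause instead of returning an error:

```sql
CREATE USER 'app'@'%' IDENTIFIED BY 'app_password' PASSWORD REQUIRE CURRENT DEFAULT;
```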
System variables
Variable name | Change type | Description |
---|---|---|
`tidb_analyze_distsql_scan_concurrency` | Modified | Changes the minimum value from `1` to `0`. When you set it to `0`, TiDB adaptively adjusts the concurrency of the scan operation when executing the `ANALYZE` operation based on the cluster size. |
`tidb_analyze_skip_column_types` | Modified | Starting from v8.2.0, TiDB does not collect columns of `MEDIUMTEXT` and `LONGTEXT` types by default to avoid potential OOM risks. |
`tidb_enable_historical_stats` | Modified | Changes the default value from `ON` to `OFF`, which turns off historical statistics to avoid potential stability issues. |
`tidb_executor_concurrency` | Modified | Adds support for setting the concurrency of the `sort` operator. |
`tidb_sysproc_scan_concurrency` | Modified | Changes the minimum value from `1` to `0`. When you set it to `0`, TiDB adaptively adjusts the concurrency of scan operations performed when executing internal SQL statements based on the cluster size. |
`tidb_resource_control_strict_mode` | Newly added | Controls whether privilege control is applied to the `SET RESOURCE GROUP` statement and the `RESOURCE_GROUP()` optimizer hint. |
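For reference, a few of the modified variables can be set as follows; the values shown are the new defaults or the new adaptive setting, not tuning recommendations.

```sql
SET GLOBAL tidb_analyze_distsql_scan_concurrency = 0;  -- 0 = adapt to cluster size for ANALYZE scans
SET GLOBAL tidb_sysproc_scan_concurrency = 0;          -- 0 = adapt for internal SQL scan operations
SET GLOBAL tidb_enable_historical_stats = OFF;         -- the new default in v8.2.0
```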
Configuration file parameters
Configuration file | Configuration parameter | Change type | Description |
---|---|---|---|
TiDB | `stats-load-concurrency` | Modified | Changes the default value from `5` to `0`, and the minimum value from `1` to `0`. The value `0` means the automatic mode, which automatically adjusts concurrency based on the configuration of the server. |
TiDB | `token-limit` | Modified | Changes the maximum value from `18446744073709551615` (64-bit platform) and `4294967295` (32-bit platform) to `1048576` to avoid causing TiDB Server OOM when setting it too large. It means that the number of sessions that can execute requests concurrently can be configured to a maximum of `1048576`. |
TiKV | `max-apply-unpersisted-log-limit` | Modified | Changes the default value from `0` to `1024` to reduce long-tail latency caused by I/O jitter on the TiKV node. It means that the maximum number of committed but not persisted Raft logs that can be applied is `1024` by default. |
TiKV | `server.grpc-compression-type` | Modified | This configuration item now also controls the compression algorithm of response messages sent from TiKV to TiDB. Enabling compression might consume more CPU resources. |
TiFlash | `security.redact_info_log` | Modified | Introduces a new value option `marker`. When you set the value to `marker`, all user data in the log is wrapped in `‹ ›`. |
System tables
- The `INFORMATION_SCHEMA.PROCESSLIST` and `INFORMATION_SCHEMA.CLUSTER_PROCESSLIST` system tables add the `SESSION_ALIAS` field to show the alias of the current session. #46889 @lcwangchao
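A minimal sketch, assuming the session alias is set through the existing `tidb_session_alias` session variable; the alias value is arbitrary.

```sql
SET @@tidb_session_alias = 'etl-worker-01';

SELECT ID, USER, DB, COMMAND, SESSION_ALIAS
FROM INFORMATION_SCHEMA.CLUSTER_PROCESSLIST;
```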
Compiler versions
- To improve the TiFlash development experience, the minimum version of LLVM required to compile and build TiFlash has been upgraded from 13.0 to 17.0. If you are a TiFlash developer, you need to upgrade the version of your LLVM compiler to ensure a smooth build. #7193 @Lloyd-Pottiger
Deprecated features
The following features are deprecated starting from v8.2.0:
- Starting from v8.2.0, the `enable-replica-selector-v2` configuration item is deprecated. The new version of the Region replica selector is used by default when sending RPC requests to TiKV.
- Starting from v8.2.0, the BR snapshot restore parameter `--concurrency` is deprecated. As an alternative, you can configure the maximum number of concurrent tasks per TiKV node during snapshot restore using `--tikv-max-restore-concurrency`.
- Starting from v8.2.0, the BR snapshot restore parameter `--granularity` is deprecated, and the coarse-grained Region scattering algorithm is enabled by default.
The following features are planned for deprecation in future versions:
- In v8.0.0, TiDB introduces the `tidb_enable_auto_analyze_priority_queue` system variable to control whether to enable the priority queue to optimize the ordering of automatic statistics collection tasks. In future versions, the priority queue will become the only way to order automatic statistics collection tasks, and the `tidb_enable_auto_analyze_priority_queue` system variable will be deprecated.
- In v8.0.0, TiDB introduces the `tidb_enable_parallel_hashagg_spill` system variable to control whether TiDB supports disk spill for the concurrent HashAgg algorithm. In future versions, the `tidb_enable_parallel_hashagg_spill` system variable will be deprecated.
- In v7.5.0, TiDB introduces the `tidb_enable_async_merge_global_stats` system variable to enable TiDB to merge partition statistics asynchronously to avoid OOM issues. In future versions, partition statistics will be merged asynchronously by default, and the `tidb_enable_async_merge_global_stats` system variable will be deprecated.
- It is planned to redesign the auto-evolution of execution plan bindings in subsequent releases, and the related variables and behavior will change.
- The TiDB Lightning parameter `conflict.max-record-rows` is planned for deprecation in a future release and will be subsequently removed. This parameter will be replaced by `conflict.threshold`, which means that the maximum number of conflicting records is consistent with the maximum number of conflicting records that can be tolerated in a single import task.
The following features are planned for removal in future versions:
- Starting from v8.0.0, TiDB Lightning deprecates the old version of conflict detection strategy for the physical import mode, and enables you to control the conflict detection strategy for both logical and physical import modes via the `conflict.strategy` parameter. The `duplicate-resolution` parameter for the old version of conflict detection will be removed in a future release.
Improvements
TiDB
- Support parallel execution of logical DDL statements (General DDL). Compared with v8.1.0, when you use 10 sessions to submit different DDL statements concurrently, the performance is improved by 3 to 6 times #53246 @D3Hunter
- Improve the logic of matching multi-column indexes using expressions like `((a = 1 and b = 2 and c > 3) or (a = 4 and b = 5 and c > 6)) and d > 3` to produce a more accurate `Range` #41598 @ghazalfamilyusa
- Optimize the performance of obtaining data distribution information when performing simple queries on tables with large data volumes #53850 @you06
- The aggregated result set can be used as an inner table for IndexJoin, allowing more complex queries to be matched to IndexJoin, thus improving query efficiency through indexing #37068 @elsa0520
- By batch deleting TiFlash placement rules, improve the processing speed of data GC after performing the `TRUNCATE` or `DROP` operation on partitioned tables #54068 @Lloyd-Pottiger
- Upgrade the version of Azure Identity Libraries and Microsoft Authentication Library to enhance security #53990 @hawkingrei
- Set the maximum value of `token-limit` to `1048576` to avoid causing TiDB Server OOM when setting it too large #53312 @djshow832
- Improve column pruning for MPP execution plans to improve TiFlash MPP execution performance #52133 @yibin87
- Optimize the performance overhead of the `IndexLookUp` operator when looking up a table with a large amount of data (>1024 rows) #53871 @crazycs520
- Remove stores without Regions during MPP load balancing #52313 @xzhangxian1008
TiKV
- Add the Compaction Job Size(files) metric to show the number of SST files involved in a single compaction job #16837 @zhangjinpeng87
- Enable the early apply feature by default. With this feature enabled, the Raft leader can apply logs after quorum peers have persisted the logs, without waiting for the leader itself to persist the log, reducing the impact of jitter in a few TiKV nodes on write request latency #16717 @glorv
- Improve the observability of Raft dropped messages to locate the root cause of slow writes #17093 @Connor1996
- Improve the observability of ingest files latency to troubleshoot cluster latency issues #17078 @LykxSassinator
- Use a separate thread to clean up Region replicas to ensure stable latency on critical Raft reads and writes #16001 @hbisheng
- Improve the observability of the number of snapshots being applied #17078 @hbisheng
PD
TiFlash
- Reduce lock conflicts under highly concurrent data read operations and optimize short query performance #9125 @JinheLin
- Eliminate redundant copies of the Join Key in the `Join` operator #9057 @gengliqi
- Concurrently perform the process of converting a two-level hash table in the `HashAgg` operator #8956 @gengliqi
- Remove redundant aggregation functions for the `HashAgg` operator to reduce computational overhead #8891 @guo-shaoge
Tools
Backup & Restore (BR)
- Optimize the backup feature, improving backup performance and stability during node restarts, cluster scaling-out, and network jitter when backing up large numbers of tables #52534 @3pointer
- Implement fine-grained checks of TiCDC changefeed during data restore. If the changefeed `CheckpointTS` is later than the data backup time, the restore operations are not affected, thereby reducing unnecessary wait times and improving user experience #53131 @YuJuncen
- Add several commonly used parameters to the `BACKUP` statement and the `RESTORE` statement, such as `CHECKSUM_CONCURRENCY` (see the sketch after this list) #53040 @RidRisR
- Except for the `br log restore` subcommand, all other `br log` subcommands support skipping the loading of the TiDB `domain` data structure to reduce memory consumption #52088 @Leavrth
- Support encryption of temporary files generated during log backup #15083 @YuJuncen
- Add a `tikv_log_backup_pending_initial_scan` monitoring metric in the Grafana dashboard #16656 @3pointer
- Optimize the output format of PITR logs and add a `RestoreTS` field in the logs #53645 @dveeden
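A sketch of the `CHECKSUM_CONCURRENCY` parameter mentioned above, assuming it follows the existing `key = value` option syntax of the `BACKUP` and `RESTORE` statements; the storage URI and the value `4` are placeholders.

```sql
BACKUP DATABASE * TO 's3://my-bucket/backup-2024-07' CHECKSUM_CONCURRENCY = 4;
RESTORE DATABASE * FROM 's3://my-bucket/backup-2024-07' CHECKSUM_CONCURRENCY = 4;
```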
TiCDC
- Support directly outputting raw events when the downstream is a Message Queue (MQ) or cloud storage #11211 @CharlesCheung96
Bug fixes
TiDB
- Fix the issue that when a SQL statement contains an Outer Join and the Join condition includes the `false IN (column_name)` expression, the query result lacks some data #49476 @ghazalfamilyusa
- Fix the issue that statistics for columns in system tables are collected when TiDB collects `PREDICATE COLUMNS` statistics for tables #53403 @hi-rustin
- Fix the issue that the `tidb_enable_column_tracking` system variable does not take effect when the `tidb_persist_analyze_options` system variable is set to `OFF` #53478 @hi-rustin
- Fix the issue of potential data races during the execution of `(*PointGetPlan).StatsInfo()` #49803 #43339 @qw4990
- Fix the issue that TiDB might return incorrect query results when you query tables with virtual columns in transactions that involve data modification operations #53951 @qw4990
- Fix the issue that the `tidb_enable_async_merge_global_stats` and `tidb_analyze_partition_concurrency` system variables do not take effect during automatic statistics collection #53972 @hi-rustin
- Fix the issue that TiDB might return the `plan not supported` error when you query `TABLESAMPLE` #54015 @tangenta
- Fix the issue that executing the `SELECT DISTINCT CAST(col AS DECIMAL), CAST(col AS SIGNED) FROM ...` query might return incorrect results #53726 @hawkingrei
- Fix the issue that queries cannot be terminated after a data read timeout on the client side #44009 @wshwsh12
- Fix the overflow issue of the `Longlong` type in predicates #45783 @hawkingrei
- Fix the issue that the Window function might panic when there is a related subquery in it #42734 @hi-rustin
- Fix the issue that the TopN operator might be pushed down incorrectly #37986 @qw4990
- Fix the issue that `SELECT INTO OUTFILE` does not work when clustered indexes are used as predicates #42093 @qw4990
- Fix the issue that the query latency of stale reads increases, caused by information schema cache misses #53428 @crazycs520
- Fix the issue that comparing a column of `YEAR` type with an unsigned integer that is out of range causes incorrect results #50235 @qw4990
- Fix the issue that the histogram and TopN in the primary key column statistics are not loaded after restarting TiDB #37548 @hawkingrei
- Fix the issue that the `final` AggMode and the `non-final` AggMode cannot coexist in Massively Parallel Processing (MPP) #51362 @AilinKid
- Fix the issue that TiDB panics when executing the `SHOW ERRORS` statement with a predicate that is always `true` #46962 @elsa0520
- Fix the issue that using a view does not work in recursive CTE #49721 @hawkingrei
- Fix the issue that TiDB might report an error due to GC when loading statistics at startup #53592 @you06
- Fix the issue that `PREPARE`/`EXECUTE` statements with the `CONV` expression containing a `?` argument might result in incorrect query results when executed multiple times #53505 @qw4990
- Fix the issue that non-BIGINT unsigned integers might produce incorrect results when compared with strings/decimals #41736 @LittleFall
- Fix the issue that TiDB does not create corresponding statistics metadata (`stats_meta`) when creating a table with foreign keys #53652 @hawkingrei
- Fix the issue that certain filter conditions in queries might cause the planner module to report an `invalid memory address or nil pointer dereference` error #53582 #53580 #53594 #53603 @YangKeao
- Fix the issue that executing `CREATE OR REPLACE VIEW` concurrently might result in the `table doesn't exist` error #53673 @tangenta
- Fix the issue that the `STATE` field in the `INFORMATION_SCHEMA.TIDB_TRX` table is empty due to the `size` of the `STATE` field not being defined #53026 @cfzjywxk
- Fix the issue that the `Distinct_count` information in GlobalStats might be incorrect when `tidb_enable_async_merge_global_stats` is disabled #53752 @hawkingrei
- Fix the issue of incorrect WARNINGS information when using Optimizer Hints #53767 @hawkingrei
- Fix the issue that negating a time type results in an incorrect value #52262 @solotzg
- Fix the issue that `REGEXP()` does not explicitly report an error for empty pattern arguments #53221 @yibin87
- Fix the issue that converting JSON to datetime might lose precision in some cases #53352 @YangKeao
- Fix the issue that `JSON_QUOTE()` returns incorrect results in some cases #37294 @dveeden
- Fix the issue that executing `ALTER TABLE ... REMOVE PARTITIONING` might cause data loss #53385 @mjonss
- Fix the issue that TiDB fails to reject unauthenticated user connections in some cases when using the `auth_socket` authentication plugin #54031 @lcwangchao
- Fix the issue that JSON-related functions return errors inconsistent with MySQL in some cases #53799 @dveeden
- Fix the issue that the `INDEX_LENGTH` field of partitioned tables in `INFORMATION_SCHEMA.PARTITIONS` is incorrect #54173 @Defined2014
- Fix the issue that the `TIDB_ROW_ID_SHARDING_INFO` field in the `INFORMATION_SCHEMA.TABLES` table is incorrect #52330 @tangenta
- Fix the issue that a generated column returns illegal timestamps #52509 @lcwangchao
- Fix the issue that setting `max-index-length` causes TiDB to panic when adding indexes using the Distributed eXecution Framework (DXF) #53281 @zimulala
- Fix the issue that the illegal column type `DECIMAL(0,0)` can be created in some cases #53779 @tangenta
- Fix the issue that using `CURRENT_DATE()` as the default value for a column results in incorrect query results #53746 @tangenta
- Fix the issue that the `ALTER DATABASE ... SET TIFLASH REPLICA` statement incorrectly adds TiFlash replicas to the `SEQUENCE` table #51990 @jiyfhust
- Fix the issue that the `REFERENCED_TABLE_SCHEMA` field in the `INFORMATION_SCHEMA.KEY_COLUMN_USAGE` table is incorrect #52350 @wd0517
- Fix the issue that inserting multiple rows in a single statement causes the `AUTO_INCREMENT` column to be discontinuous when `AUTO_ID_CACHE=1` #52465 @tiancaiamao
- Fix the format of deprecation warnings #52515 @dveeden
- Fix the issue that the `TRACE` command is missing in `copr.buildCopTasks` #53085 @time-and-fate
- Fix the issue that the `memory_quota` hint might not work in subqueries #53834 @qw4990
- Fix the issue that improper use of metadata locks might lead to writing anomalous data when using the plan cache under certain circumstances #53634 @zimulala
- Fix the issue that after a statement within a transaction is killed by OOM, if TiDB continues to execute the next statement within the same transaction, you might get an error `Trying to start aggressive locking while it's already started` and a panic occurs #53540 @MyonKeminta
TiKV
- Fix the issue that pushing down the `JSON_ARRAY_APPEND()` function to TiKV causes TiKV to panic #16930 @dbsid
- Fix the issue that the leader does not clean up failed snapshot files in time #16976 @hbisheng
- Fix the issue that highly concurrent Coprocessor requests might cause TiKV OOM #16653 @overvenus
- Fix the issue that changing the `raftstore.periodic-full-compact-start-times` configuration item online might cause TiKV to panic #17066 @SpadeA-Tang
- Fix the failure of `make docker` and `make docker_test` #17075 @shunki-fujita
- Fix the issue that the gRPC request sources duration metric is displayed incorrectly in the monitoring dashboard #17133 @King-Dylan
- Fix the issue that setting the gRPC message compression method via `grpc-compression-type` does not take effect on messages sent from TiKV to TiDB #17176 @ekexium
- Fix the issue that the output of the `raft region` command in tikv-ctl does not include the Region status information #17037 @glorv
- Fix the issue that CDC and log-backup do not limit the timeout of `check_leader` using the `advance-ts-interval` configuration, causing the `resolved_ts` lag to be too large when TiKV restarts normally in some cases #17107 @MyonKeminta
PD
- Fix the issue that `ALTER PLACEMENT POLICY` cannot modify the placement policy #52257 #51712 @jiyfhust
- Fix the issue that the scheduling of write hotspots might break placement policy constraints #7848 @lhy1024
- Fix the issue that down peers might not recover when using Placement Rules #7808 @rleungx
- Fix the issue that a large number of retries occur when canceling resource groups queries #8217 @nolouch
- Fix the issue that manually transferring the PD leader might fail #8225 @HuSharp
TiFlash
- Fix the issue of query timeout when executing queries on partitioned tables that contain empty partitions #9024 @JinheLin
- Fix the issue that in the disaggregated storage and compute architecture, null values might be incorrectly returned in queries after adding non-null columns in DDL operations #9084 @Lloyd-Pottiger
- Fix the issue that the `SUBSTRING_INDEX()` function might cause TiFlash to crash in some corner cases #9116 @wshwsh12
- Fix the issue that a large number of duplicate rows might be read in FastScan mode after importing data via BR or TiDB Lightning #9118 @JinheLin
Tools
Backup & Restore (BR)
- Fix the issue that BR fails to restore a transactional KV cluster due to an empty `EndKey` #52574 @3pointer
- Fix the issue that a PD connection failure could cause the TiDB instance where the log backup advancer owner is located to panic #52597 @YuJuncen
- Fix the issue that log backup might be paused after the advancer owner migration #53561 @RidRisR
- Fix the issue that BR fails to correctly identify errors due to multiple nested retries during the restore process #54053 @RidRisR
- Fix the issue that the connection used to fetch TiKV configurations might not be closed #52595 @RidRisR
- Fix the issue that the `TestStoreRemoved` test case is unstable #52791 @YuJuncen
- Fix the issue that TiFlash crashes during point-in-time recovery (PITR) #52628 @RidRisR
- Fix the inefficiency issue in scanning DDL jobs during incremental backups #54139 @3pointer
- Fix the issue that the backup performance during checkpoint backups is affected due to interruptions in seeking Region leaders #17168 @Leavrth
TiCDC
- Fix inaccurate display of the Kafka Outgoing Bytes panel in Grafana #10777 @asddongmen
- Fix the issue that data inconsistency might occur when restarting Changefeed repeatedly when performing a large number of `UPDATE` operations in a multi-node environment #11219 @lidezhu
TiDB Data Migration (DM)
TiDB Lightning
Dumpling
TiDB Binlog
Contributors
We would like to thank the following contributors from the TiDB community:
- CabinfeverB
- DanRoscigno (First-time contributor)
- ei-sugimoto (First-time contributor)
- eltociear
- jiyfhust
- michaelmdeng (First-time contributor)
- mittalrishabh
- onlyacat
- qichengzx (First-time contributor)
- SeaRise
- shawn0915
- shunki-fujita (First-time contributor)
- tonyxuqqi
- wwu (First-time contributor)
- yzhan1