TiDB 8.5.0 Release Notes


Release date: December 19, 2024

TiDB version: 8.5.0


TiDB 8.5.0 is a Long-Term Support (LTS) release.

Compared with the previous LTS 8.1.0, 8.5.0 includes new features, improvements, and bug fixes released in 8.2.0-DMR, 8.3.0-DMR, and 8.4.0-DMR. When you upgrade from 8.1.x to 8.5.0, you can download the TiDB Release Notes PDF to view all release notes between the two LTS versions. The following lists some highlights from 8.1.0 to 8.5.0, grouped by category:

Scalability and Performance

  • Reduce data processing latency in multiple dimensions

    TiDB continuously refines data processing to improve performance, effectively meeting the low-latency SQL processing requirements in financial scenarios. Key updates include:

    • Support parallel sorting (introduced in v8.2.0)
    • Optimize the batch processing strategy for KV (key-value) requests (introduced in v8.3.0)
    • Support parallel mode for TSO requests (introduced in v8.4.0)
    • Reduce the resource overhead of DELETE operations (introduced in v8.4.0)
    • Improve query performance for cached tables (introduced in v8.4.0)
    • Introduce an optimized version of Hash Join (experimental, introduced in v8.4.0)

  • TiKV MVCC In-Memory Engine (IME) (introduced in v8.5.0)

    The TiKV MVCC in-memory engine caches the most recent MVCC versions of data in memory, helping TiKV quickly skip older versions and retrieve the latest data. This feature can significantly improve data scan performance in scenarios where data records are frequently updated or historical versions are retained for a long period.

  • Use Active PD Followers to enhance PD's Region information query service (GA in v8.5.0)

    TiDB v7.6.0 introduces "Active PD Follower" as an experimental feature, which allows PD followers to provide Region information query services. This feature improves the capability of the PD cluster to handle GetRegion and ScanRegions requests in clusters with a large number of TiDB nodes and Regions, thereby reducing the CPU pressure on PD leaders. In v8.5.0, this feature becomes generally available (GA).

  • Instance-level execution plan cache (experimental, introduced in v8.4.0)

    Instance-level plan cache allows all sessions within the same TiDB instance to share the plan cache. Compared with session-level plan cache, this feature reduces SQL compilation time by caching more execution plans in memory, decreasing overall SQL execution time. It improves OLTP performance and throughput while providing better control over memory usage and enhancing database stability.

  • Global indexes for partitioned tables (GA in v8.4.0)

    Global indexes can effectively improve the efficiency of retrieving non-partitioned columns, and remove the restriction that a unique key must contain the partition key. This feature extends the usage scenarios of TiDB partitioned tables, improves the performance of partitioned tables, and reduces resource consumption in certain query scenarios.

  • Default pushdown of the Projection operator to the storage engine (introduced in v8.3.0)

    Pushing the Projection operator down to the storage engine can distribute the load across storage nodes while reducing data transfer between nodes. This optimization helps to reduce the execution time of certain SQL queries and improves the overall database performance.

  • Ignoring unnecessary columns when collecting statistics (introduced in v8.3.0)

    Under the premise of ensuring that the optimizer can obtain the necessary information, TiDB speeds up statistics collection, improves the timeliness of statistics, and thus ensures that the optimal execution plan is selected, improving cluster performance. Meanwhile, TiDB also reduces system overhead and improves resource utilization.

Reliability and availability

  • Improve the stability of large-scale clusters

    Companies that use TiDB to run multi-tenant or SaaS applications often need to store a large number of tables. In v8.5.0, TiDB significantly enhances the stability of large-scale clusters:

    • Schema cache control and setting the memory quota for the TiDB statistics cache are generally available (GA), reducing stability issues caused by excessive memory consumption.
    • PD introduces the Active Follower feature to handle the pressure brought by numerous Regions, and gradually decouples the services handled by PD for independent deployment.
    • PD improves the performance of Region heartbeat processing and supports tens of millions of Regions in a single cluster.
    • You can increase concurrency and reduce the number of collected objects to improve the efficiency of statistics collection and loading, ensuring the stability of execution plans in large clusters.

  • Support more triggers for runaway queries, and support switching resource groups (introduced in v8.4.0)

    Runaway queries offer an effective way to mitigate the impact of unexpected SQL performance issues on systems. TiDB v8.4.0 introduces the number of keys processed by the Coprocessor (PROCESSED_KEYS) and request units (RU) as identifying conditions, and puts identified queries into the specified resource group for more precise identification and control of runaway queries.

  • Support setting the maximum limit on resource usage for background tasks of resource control (experimental, introduced in v8.4.0)

    By setting a maximum percentage limit on background tasks of resource control, you can control their resource consumption based on the needs of different application systems. This keeps background task consumption at a low level and ensures the quality of online services.

  • Enhance and expand TiProxy use cases

    As a crucial component of TiDB high availability, TiProxy extends its capabilities beyond SQL traffic access and forwarding to support cluster change evaluation. Key features include:

    • TiProxy supports traffic capture and replay (experimental, introduced in v8.4.0)
    • TiProxy supports built-in virtual IP management (introduced in v8.3.0)
    • TiProxy supports multiple load balancing policies (introduced in v8.2.0)

  • The parallel HashAgg algorithm of TiDB supports disk spill (GA in v8.2.0)

    HashAgg is a widely used aggregation operator in TiDB for efficiently aggregating rows with the same field values. TiDB v8.0.0 introduces parallel HashAgg as an experimental feature to further enhance processing speed. When memory resources are insufficient, parallel HashAgg spills temporary sorted data to disk, avoiding potential OOM risks caused by excessive memory usage. This improves query performance while maintaining node stability. In v8.2.0, this feature becomes generally available (GA) and is enabled by default, enabling you to safely configure the concurrency of parallel HashAgg using tidb_executor_concurrency.

SQL

  • Foreign key (GA in v8.5.0)

    Foreign keys are constraints in a database that establish relationships between tables, ensuring data consistency and integrity. They ensure that the data referenced in a child table exists in the parent table, preventing the insertion of invalid data. Foreign keys also support cascading operations (such as automatic synchronization during deletion or update), simplifying business logic implementation and reducing the complexity of manually maintaining data relationships.

  • Vector search (experimental, introduced in v8.4.0)

    Vector search is a search method based on data semantics, which provides more relevant search results. As one of the core functions of AI and large language models (LLMs), vector search can be used in various scenarios such as Retrieval-Augmented Generation (RAG), semantic search, and recommendation systems.

DB Operations and Observability

  • Display TiKV and TiDB CPU times in memory tables (introduced in v8.4.0)

    CPU time is now integrated into a system table, displayed alongside other session or SQL metrics, letting you observe high CPU consumption operations from multiple perspectives and improving diagnostic efficiency. This is especially useful for diagnosing scenarios such as CPU spikes in instances or read/write hotspots in clusters.

  • Support viewing aggregated TiKV CPU time by table or database (introduced in v8.4.0)

    When hotspot issues are not caused by individual SQL statements, using the CPU time aggregated at the table or database level in Top SQL can help you quickly identify the tables or applications responsible for the hotspots, significantly improving the efficiency of diagnosing hotspot and CPU consumption issues.

  • Backup & Restore (BR) uses AWS SDK for Rust to access external storage (introduced in v8.5.0)

    BR replaces the original Rusoto library with AWS SDK for Rust to access external storage such as Amazon S3 from TiKV. This change enhances compatibility with AWS features such as IMDSv2 and EKS Pod Identity.

Security

  • Client-side encryption of snapshot backup data and log backup data (GA in v8.5.0)

    Before uploading backup data to your backup storage, you can encrypt the backup data to ensure its security during storage and transmission.

    Feature details

    Scalability

    • Setting the memory limit for schema cache is now generally available (GA). When the number of tables reaches hundreds of thousands or even millions, this feature significantly reduces the memory usage of schema metadata #50959 @tiancaiamao @wjhuang2016 @gmhdbjd @tangenta

      In some SaaS scenarios, where the number of tables reaches hundreds of thousands or even millions, schema metadata can consume a significant amount of memory. With this feature enabled, TiDB uses the Least Recently Used (LRU) algorithm to cache and evict the corresponding schema metadata, effectively reducing memory usage.

      Starting from v8.4.0, this feature is enabled by default with a default value of 536870912 (that is, 512 MiB). You can adjust it as needed using the variable tidb_schema_cache_size.

      For more information, see documentation.
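      For example, a minimal sketch of checking and adjusting the quota (the 1 GiB value below is only an illustration):

      ```sql
      -- Check the current schema cache quota (512 MiB by default)
      SELECT @@global.tidb_schema_cache_size;

      -- Raise the quota for clusters with millions of tables
      SET GLOBAL tidb_schema_cache_size = 1073741824; -- 1 GiB
      ```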

    • Provide the Active PD Follower feature to enhance the scalability of PD's Region information query service (GA) #7431 @okJiang

      In a TiDB cluster with a large number of Regions, the PD leader might experience high CPU load due to the increased overhead of handling heartbeats and scheduling tasks. If the cluster has many TiDB instances, and there is a high concurrency of requests for Region information, the CPU pressure on the PD leader increases further and might cause PD services to become unavailable.

      To ensure high availability, TiDB v7.6.0 introduces Active PD Follower as an experimental feature to enhance the scalability of PD's Region information query service. In v8.5.0, this feature becomes generally available (GA). You can enable the Active PD Follower feature by setting the system variable pd_enable_follower_handle_region to ON. After this feature is enabled, TiDB evenly distributes Region information requests to all PD servers, and PD followers can also handle Region requests, thereby reducing the CPU pressure on the PD leader.

      For more information, see documentation.
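      For example, a minimal sketch of enabling the feature:

      ```sql
      -- Allow PD followers to serve Region information requests
      SET GLOBAL pd_enable_follower_handle_region = ON;
      ```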

    Performance

    • TiDB accelerated table creation becomes generally available (GA), significantly reducing data migration and cluster initialization time #50052 @D3Hunter @gmhdbjd

      TiDB v7.6.0 introduces accelerated table creation as an experimental feature, controlled by the system variable tidb_ddl_version. Starting from v8.0.0, this system variable is renamed to tidb_enable_fast_create_table.

      In v8.5.0, TiDB accelerated table creation becomes generally available (GA) and is enabled by default. During data migration and cluster initialization, this feature supports rapid creation of millions of tables, significantly reducing operation time.

      For more information, see documentation.
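      As a sketch, you can verify or change the setting; the feature is enabled by default in v8.5.0, so no action is needed to use it:

      ```sql
      -- Check whether accelerated table creation is enabled (ON by default)
      SELECT @@global.tidb_enable_fast_create_table;

      -- Disable it only if you need to fall back to the previous behavior
      SET GLOBAL tidb_enable_fast_create_table = OFF;
      ```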

    • TiKV supports the MVCC in-memory engine (IME), which accelerates queries involving scans of extensive MVCC historical versions #16141 @SpadeA-Tang @glorv @overvenus

      When records are frequently updated, or TiDB is required to retain historical versions for extended periods (for example, 24 hours), the accumulation of MVCC versions can degrade scan performance. The TiKV MVCC in-memory engine improves scan performance by caching the latest MVCC versions in memory, and using a rapid GC mechanism to remove historical versions from memory.

      Starting from v8.5.0, TiKV introduces the MVCC in-memory engine. If the accumulation of MVCC versions in the TiKV cluster leads to degraded scan performance, you can enable the TiKV MVCC in-memory engine by setting the TiKV configuration parameter in-memory-engine.enable, improving scan performance.

      For more information, see documentation.
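      For example, a minimal tikv.toml sketch that enables the engine (other tuning options are omitted):

      ```toml
      # Enable the TiKV MVCC in-memory engine
      [in-memory-engine]
      enable = true
      ```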

    Reliability

    • Support limiting the maximum rate and concurrency of requests processed by PD #5739 @rleungx

      When a sudden influx of requests is sent to PD, it can lead to high workloads and potentially affect PD performance. Starting from v8.5.0, you can use pd-ctl to limit the maximum rate and concurrency of requests processed by PD, improving its stability.

      For more information, see documentation.

    SQL

    • Support foreign keys (GA) #36982 @YangKeao @crazycs520

      The foreign key feature becomes generally available (GA) in v8.5.0. Foreign key constraints help ensure data consistency and integrity. You can easily establish foreign key relationships between tables, with support for cascading updates and deletions, simplifying data management. This feature enhances support for applications with complex data relationships.

      For more information, see documentation.
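      For example, a minimal sketch of a parent-child relationship with cascading deletes (table and constraint names are illustrative):

      ```sql
      CREATE TABLE parent (
          id INT PRIMARY KEY
      );

      CREATE TABLE child (
          id INT PRIMARY KEY,
          pid INT,
          CONSTRAINT fk_child_parent FOREIGN KEY (pid)
              REFERENCES parent (id) ON DELETE CASCADE
      );

      -- Deleting a parent row automatically removes its child rows
      INSERT INTO parent VALUES (1);
      INSERT INTO child VALUES (10, 1);
      DELETE FROM parent WHERE id = 1;
      ```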

    • Introduce the ADMIN ALTER DDL JOBS statement to support modifying the DDL jobs online #57229 @fzzf678 @tangenta

      Starting from v8.3.0, you can set the variables tidb_ddl_reorg_batch_size and tidb_ddl_reorg_worker_cnt at the session level. As a result, setting these two variables globally no longer affects all running DDL jobs. To modify the values of these variables, you need to cancel the DDL job first, adjust the variables, and then resubmit the job.

      TiDB v8.5.0 introduces the ADMIN ALTER DDL JOBS statement, letting you adjust the variable values of specific DDL jobs online. This enables flexible balancing of resource consumption and performance. The changes are limited to individual jobs, making the impact more controllable. For example:

      • ADMIN ALTER DDL JOBS job_id THREAD = 8;: adjusts the tidb_ddl_reorg_worker_cnt of the specified DDL job online.
      • ADMIN ALTER DDL JOBS job_id BATCH_SIZE = 256;: adjusts the tidb_ddl_reorg_batch_size of the specified job online.
      • ADMIN ALTER DDL JOBS job_id MAX_WRITE_SPEED = '200MiB';: adjusts the write traffic of index data to each TiKV node online.

      For more information, see documentation.

    Security

    • BR supports client-side encryption of both full backup data and log backup data (GA) #28640 #56433 @joccau @Tristan1900

      • Client-side encryption of full backup data (introduced as experimental in TiDB v5.3.0) enables you to encrypt backup data on the client side using a custom fixed key.

      • Client-side encryption of log backup data (introduced as experimental in TiDB v8.4.0) enables you to encrypt log backup data on the client side using one of the following methods:

        • Encrypt using a custom fixed key
        • Encrypt using a master key stored on a local disk
        • Encrypt using a master key managed by a Key Management Service (KMS)

      Starting from v8.5.0, both encryption features become generally available (GA), offering enhanced client-side data security.

      For more information, see Encrypt the backup data and Encrypt the log backup data.
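      For example, a hedged sketch of a full backup with client-side encryption using a custom fixed key; the PD address, storage URI, and key value are placeholders:

      ```shell
      br backup full \
          --pd "127.0.0.1:2379" \
          --storage "s3://backup-bucket/2024-12-19/" \
          --crypter.method aes128-ctr \
          --crypter.key 0123456789abcdef0123456789abcdef
      ```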

    • TiKV encryption at rest supports Google Cloud Key Management Service (Google Cloud KMS) (GA) #8906 @glorv

      TiKV ensures data security by using the encryption at rest technique to encrypt stored data. The core aspect of this technique is proper key management. In v8.0.0, TiKV encryption at rest experimentally supports using Google Cloud KMS for master key management.

      Starting from v8.5.0, encryption at rest using Google Cloud KMS becomes generally available (GA). To use this feature, first create a key on Google Cloud, and then configure the [security.encryption.master-key] section in the TiKV configuration file.

      For more information, see documentation.

    Compatibility changes

    Behavior changes

    • In non-strict mode (sql_mode = ''), inserting NULL values into non-NULL columns now returns an error for MySQL compatibility. #55457 @joechenrh
    • The ALTER TABLE ... DROP FOREIGN KEY IF EXISTS ... statement is no longer supported. #56703 @YangKeao
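    For example, a minimal sketch of the first change:

    ```sql
    SET sql_mode = '';
    CREATE TABLE t (a INT NOT NULL);
    -- In non-strict mode, this now returns an error for MySQL compatibility
    INSERT INTO t VALUES (NULL);
    ```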

    System variables

    Variable name | Change type | Description
    tidb_enable_fast_create_table | Modified | Changes the default value from OFF to ON after further tests, meaning that the accelerated table creation feature is enabled by default.
    tidb_ddl_reorg_max_write_speed | Newly added | Limits the write bandwidth for each TiKV node. It only takes effect when index creation acceleration is enabled (controlled by the tidb_ddl_enable_fast_reorg variable). For example, setting the variable to 200MiB limits the maximum write speed to 200 MiB/s.
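    For example, a minimal sketch of setting the new variable:

    ```sql
    -- Cap the index-ingestion write traffic to each TiKV node at 200 MiB/s
    SET GLOBAL tidb_ddl_reorg_max_write_speed = '200MiB';
    ```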

    Configuration parameters

    Configuration file or component | Configuration parameter | Change type | Description
    TiDB | deprecate-integer-display-length | Modified | Starting from v8.5.0, the integer display width feature is deprecated. The default value of this configuration item is changed from false to true.
    TiKV | raft-client-queue-size | Modified | Changes the default value from 8192 to 16384.
    PD | patrol-region-worker-count | Newly added | Controls the number of concurrent operators created by the checker when inspecting the health state of a Region.
    BR | --checksum | Modified | Changes the default value from true to false, meaning that BR does not calculate the table-level checksum during full backups by default, to improve backup performance.

    Removed features

    • The following feature has been removed:

      • In v8.4.0, TiDB Binlog is removed. Starting from v8.3.0, TiDB Binlog is fully deprecated. For incremental data replication, use TiCDC instead. For point-in-time recovery (PITR), use PITR. Before you upgrade your TiDB cluster to v8.4.0 or later versions, be sure to switch to TiCDC and PITR.
    • The following features are planned for removal in future versions:

      • Starting from v8.0.0, TiDB Lightning deprecates the old version of conflict detection strategy for the physical import mode, and enables you to control the conflict detection strategy for both logical and physical import modes via the conflict.strategy parameter. The duplicate-resolution parameter for the old version of conflict detection will be removed in a future release.

    Deprecated features

    The following features are planned for deprecation in future versions:

    • In v8.0.0, TiDB introduces the tidb_enable_auto_analyze_priority_queue system variable to control whether priority queues are enabled to optimize the ordering of tasks that automatically collect statistics. In future releases, the priority queue will be the only way to order tasks for automatically collecting statistics, so this system variable will be deprecated.
    • In v7.5.0, TiDB introduces the tidb_enable_async_merge_global_stats system variable. You can use it to set TiDB to use asynchronous merging of partition statistics to avoid OOM issues. In future releases, partition statistics will be merged asynchronously, so this system variable will be deprecated.
    • It is planned to redesign the automatic evolution of execution plan bindings in subsequent releases, and the related variables and behavior will change.
    • In v8.0.0, TiDB introduces the tidb_enable_parallel_hashagg_spill system variable to control whether TiDB supports disk spill for the concurrent HashAgg algorithm. In future versions, this system variable will be deprecated.
    • In v5.1, TiDB introduces the tidb_partition_prune_mode system variable to control whether to enable the dynamic pruning mode for partitioned tables. Starting from v8.5.0, a warning is returned when you set this variable to static or static-only. In future versions, this system variable will be deprecated.
    • The TiDB Lightning parameter conflict.max-record-rows is planned for deprecation in a future release and will be subsequently removed. This parameter will be replaced by conflict.threshold, which means that the maximum number of conflicting records is consistent with the maximum number of conflicting records that can be tolerated in a single import task.
    • Starting from v6.3.0, partitioned tables use dynamic pruning mode by default. Compared with static pruning mode, dynamic pruning mode supports features such as IndexJoin and plan cache with better performance. Therefore, static pruning mode will be deprecated.

    Improvements

    • TiDB

      • Improve the response speed of job cancellation for the ADD INDEX acceleration feature when disabling the Distributed eXecution Framework (DXF) #56017 @lance6716
      • Improve the speed of adding indexes to small tables #54230 @tangenta
      • Add a new system variable tidb_ddl_reorg_max_write_speed to limit the maximum speed of the ingest phase when adding indexes #57156 @CbcWestwolf
      • Improve the performance of querying information_schema.tables in some cases #57295 @tangenta
      • Support dynamically adjusting more DDL job parameters #57526 @fzzf678
      • Support global indexes that contain all columns from a partition expression #56230 @Defined2014
      • Support partition pruning for list partitioned tables in range query scenarios #56673 @Defined2014
      • Enable FixControl#46177 by default to fix the issue that a full table scan is incorrectly selected instead of an index range scan in some cases #46177 @terry1purcell
      • Improve the internal estimation logic to better utilize statistics of multi-column and multi-value indexes, enhancing estimation accuracy for certain queries involving multi-value indexes #56915 @time-and-fate
      • Improve the cost estimation for full table scans in specific scenarios, reducing the probability of incorrectly choosing a full table scan #57085 @terry1purcell
      • Optimize the amount of data required for synchronous loading of statistics to improve loading performance #56812 @winoros
      • Optimize the execution plan in specific cases where an OUTER JOIN involves a unique index and an ORDER BY ... LIMIT clause, improving execution efficiency #56321 @winoros
    • TiKV

      • Use a separate thread to clean up replicas, ensuring stable latency for critical paths of Raft reads and writes #16001 @hbisheng
      • Improve the performance of the vector distance function by supporting SIMD #17290 @EricZequan
    • PD

      • Support dynamic switching of the tso service between microservice and non-microservice modes #8477 @rleungx
      • Optimize the case format of certain fields in the pd-ctl config output #8694 @lhy1024
      • Store limit v2 becomes generally available (GA) #8865 @lhy1024
      • Support configuring Region inspection concurrency (experimental) #8866 @lhy1024
    • TiFlash

      • Improve the garbage collection speed of outdated data in the background for tables with clustered indexes #9529 @JaySon-Huang
      • Improve query performance of vector search in data update scenarios #9599 @Lloyd-Pottiger
      • Add monitoring metrics for CPU usage during vector index building #9032 @JaySon-Huang
      • Improve the execution efficiency of logical operators #9146 @windtalker
    • Tools

      • Backup & Restore (BR)

        • Reduce unnecessary log printing during backup #55902 @Leavrth
        • Optimize the error message for the encryption key --crypter.key #56388 @Tristan1900
        • Increase concurrency in BR when creating databases to improve data restore performance #56866 @Leavrth
        • Disable the table-level checksum calculation during full backups by default (--checksum=false) to improve backup performance #56373 @Tristan1900
        • Add a mechanism to independently track and reset the connection timeout for each storage node, enhancing the handling of slow nodes and preventing backup operations from hanging #57666 @3pointer
      • TiDB Data Migration (DM)

        • Add retries for DM-worker to connect to DM-master during DM cluster startup #4287 @GMHDBJD

    Bug fixes

    • TiDB

      • Fix the issue that TiDB does not automatically retry requests when the Region metadata returned from PD lacks Leader information, potentially causing execution errors #56757 @cfzjywxk
      • Fix the issue that TTL tasks cannot be canceled when there is a write conflict #56422 @YangKeao
      • Fix the issue that when canceling a TTL task, the corresponding SQL is not killed forcibly #56511 @lcwangchao
      • Fix the issue that existing TTL tasks are executed unexpectedly frequently in a cluster that is upgraded from v6.5 to v7.5 or later #56539 @lcwangchao
      • Fix the issue that the INSERT ... ON DUPLICATE KEY statement is not compatible with mysql_insert_id #55965 @tiancaiamao
      • Fix the issue that TTL might fail if TiKV is not selected as the storage engine #56402 @YangKeao
      • Fix the issue that the AUTO_INCREMENT field is not correctly set after importing data using the IMPORT INTO statement #56476 @D3Hunter
      • Fix the issue that TiDB does not check the index length limitation when executing ADD INDEX #56930 @fzzf678
      • Fix the issue that executing RECOVER TABLE BY JOB JOB_ID; might cause TiDB to panic #55113 @crazycs520
      • Fix the issue that stale read does not strictly verify the timestamp of the read operation, resulting in a small probability of affecting the consistency of the transaction when an offset exists between the TSO and the real physical time #56809 @MyonKeminta
      • Fix the issue that TiDB could not resume Reorg DDL tasks from the previous progress after the DDL owner node is switched #56506 @tangenta
      • Fix the issue that some metrics in the monitoring panel of Distributed eXecution Framework (DXF) are inaccurate #57172 @fzzf678 #56942 @fzzf678
      • Fix the issue that REORGANIZE PARTITION fails to return error reasons in certain cases #56634 @mjonss
      • Fix the issue that querying INFORMATION_SCHEMA.TABLES returns incorrect results due to case sensitivity #56987 @joechenrh
      • Fix the issue of illegal memory access that might occur when a Common Table Expression (CTE) has multiple data consumers and one consumer exits without reading any data #55881 @windtalker
      • Fix the issue that INDEX_HASH_JOIN might hang during an abnormal exit #54055 @wshwsh12
      • Fix the issue that the TRUNCATE statement returns incorrect results when handling NULL values #53546 @tuziemon
      • Fix the issue that the CAST AS CHAR function returns incorrect results due to type inference errors #56640 @zimulala
      • Fix the issue of truncated strings in the output of some functions due to type inference errors #56587 @joechenrh
      • Fix the issue that the ADDTIME() and SUBTIME() functions return incorrect results when their first argument is a date type #57569 @xzhangxian1008
      • Fix the issue that invalid NULL values can be inserted in non-strict mode (sql_mode = '') #56381 @joechenrh
      • Fix the issue that the UPDATE statement incorrectly updates values of the ENUM type #56832 @xhebox
      • Fix the issue that enabling the tidb_low_resolution_tso variable causes resource leaks during the execution of SELECT FOR UPDATE statements #55468 @tiancaiamao
      • Fix the issue that the JSON_TYPE() function does not validate the parameter type, causing no errors returned when a non-JSON data type is passed #54029 @YangKeao
      • Fix the issue that using JSON functions in PREPARE statements might cause execution failures #54044 @YangKeao
      • Fix the issue that converting data from the BIT type to the CHAR type might cause TiKV panics #56494 @lcwangchao
      • Fix the issue that using variables or parameters in the CREATE VIEW statement does not report errors #53176 @mjonss
      • Fix the issue that the JSON_VALID() function returns incorrect results #56293 @YangKeao
      • Fix the issue that TTL tasks are not canceled after the tidb_ttl_job_enable variable is disabled #57404 @YangKeao
      • Fix the issue that using the RANGE COLUMNS partition function and the utf8mb4_0900_ai_ci collation at the same time could result in incorrect query results #57261 @Defined2014
      • Fix the runtime error caused by executing a prepared statement that begins with a newline character, resulting in an array out of bounds #54283 @Defined2014
      • Fix the precision issue in the UTC_TIMESTAMP() function, such as setting the precision too high #56451 @chagelo
      • Fix the issue that foreign key errors are not omitted in UPDATE, INSERT, and DELETE IGNORE statements #56678 @YangKeao
      • Fix the issue that when querying the information_schema.cluster_slow_query table, if the time filter is not added, only the latest slow log file is queried #56100 @crazycs520
      • Fix the issue of memory leaks in TTL tables #56934 @lcwangchao
      • Fix the issue that foreign key constraints do not take effect for tables in write_only status, preventing using tables in non-public status #55813 @YangKeao
      • Fix the issue that using subqueries after the NATURAL JOIN or USING clause might result in errors #53766 @dash12653
      • Fix the issue that if a CTE contains the ORDER BY, LIMIT, or SELECT DISTINCT clause and is referenced by the recursive part of another CTE, it might be incorrectly inlined and result in an execution error #56603 @elsa0520
      • Fix the issue that the CTE defined in VIEW is incorrectly inlined #56582 @elsa0520
      • Fix the issue that Plan Replayer might report an error when importing a table structure containing foreign keys #56456 @hawkingrei
      • Fix the issue that Plan Replayer might report an error when importing a table structure containing Placement Rules #54961 @hawkingrei
      • Fix the issue that when using ANALYZE to collect statistics for a table, if the table contains expression indexes on virtual generated columns, the execution reports an error #57079 @hawkingrei
      • Fix the issue that the DROP DATABASE statement does not correctly trigger the corresponding update in statistics #57227 @Rustin170506
      • Fix the issue that when parsing a database name in CTE, it returns a wrong database name #54582 @hawkingrei
      • Fix the issue that the upper bound and lower bound of the histogram are corrupted when DUMP STATS is transforming statistics into JSON #56083 @hawkingrei
      • Fix the issue that EXISTS subquery results, when further involved in algebraic operations, could differ from the results in MySQL #56641 @windtalker
      • Fix the issue that execution plan bindings cannot be created for the multi-table DELETE statement with aliases #56726 @hawkingrei
      • Fix the issue that the optimizer does not take into account the character set and collations when simplifying complex predicates, resulting in possible execution errors #56479 @dash12653
      • Fix the issue that the data in the Stats Healthy Distribution panel of Grafana might be incorrect #57176 @hawkingrei
      • Fix the issue that vector search might return incorrect results when querying tables with clustered indexes #57627 @winoros
    • TiKV

      • Fix the panic issue that occurs when read threads access outdated indexes in the MemTable of the Raft Engine #17383 @LykxSassinator
      • Fix the issue that when a large number of transactions are queuing for lock release on the same key and the key is frequently updated, excessive pressure on deadlock detection might cause TiKV OOM issues #17394 @MyonKeminta
      • Fix the issue that CPU usage for background tasks of resource control is counted twice #17603 @glorv
      • Fix the issue that TiKV OOM might occur due to the accumulation of CDC internal tasks #17696 @3AceShowHand
      • Fix the issue that large batch writes cause performance jitter when raft-entry-max-size is set too high #17701 @SpadeA-Tang
      • Fix the issue that the leader cannot be quickly elected after Region split #17602 @LykxSassinator
      • Fix the issue that TiKV might panic when executing queries containing RADIANS() or DEGREES() functions #17852 @gengliqi
      • Fix the issue that write jitter might occur when all hibernated Regions are awakened #17101 @hhwyt
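
      For context, the RADIANS() and DEGREES() functions mentioned above follow MySQL semantics, converting between degrees and radians. A minimal illustration (not taken from the release notes):

      ```sql
      -- RADIANS() converts degrees to radians; DEGREES() is the inverse.
      SELECT RADIANS(180);   -- 3.141592653589793
      SELECT DEGREES(PI());  -- 180
      ```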
    • PD

      • Fix the memory leak issue in hotspot cache #8698 @lhy1024
      • Fix the issue that the resource group selector does not take effect on any panel #56572 @glorv
      • Fix the issue that deleted resource groups still appear in the monitoring panel #8716 @AndreMouche
      • Fix unclear log descriptions during the Region syncer loading process #8717 @lhy1024
      • Fix the memory leak issue in label statistics #8700 @lhy1024
      • Fix the issue that configuring tidb_enable_tso_follower_proxy to 0 or OFF fails to disable the TSO Follower Proxy feature #8709 @JmPotato
    • TiFlash

      • Fix the issue that the SUBSTRING() function does not support the pos and len arguments for certain integer types, causing query errors #9473 @gengliqi
      • Fix the issue that vector search performance might degrade after scaling out TiFlash write nodes in the disaggregated storage and compute architecture #9637 @kolafish
      • Fix the issue that the SUBSTRING() function returns incorrect results when the second parameter is negative #9604 @guo-shaoge
      • Fix the issue that the REPLACE() function returns an error when the first parameter is a constant #9522 @guo-shaoge
      • Fix the issue that LPAD() and RPAD() functions return incorrect results in some cases #9465 @guo-shaoge
      • Fix the issue that if the internal task for building a vector index is unexpectedly interrupted after the index is created, TiFlash might write corrupted data and fail to restart #9714 @JaySon-Huang
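
      As a reference for the expected MySQL-compatible behavior of the string functions fixed above, a negative second argument to SUBSTRING() counts from the end of the string, and LPAD() pads on the left. Illustrative examples (not taken from the release notes):

      ```sql
      -- A negative pos counts from the end of the string.
      SELECT SUBSTRING('TiFlash', -5);     -- 'Flash'
      SELECT SUBSTRING('TiFlash', -5, 3);  -- 'Fla'
      -- LPAD() left-pads the string to the given length.
      SELECT LPAD('5', 3, '0');            -- '005'
      ```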
    • Tools

      • Backup & Restore (BR)

        • Fix the OOM issue during backups when there are too many uncompleted range gaps, reducing the amount of pre-allocated memory #53529 @Leavrth
        • Fix the issue that global indexes cannot be backed up #57469 @Defined2014
        • Fix the issue that logs might print out encrypted information #57585 @kennytm
        • Fix the issue that the advancer cannot handle lock conflicts #57134 @3pointer
        • Fix potential security vulnerabilities by upgrading the k8s.io/api library version #57790 @BornChanger
        • Fix the issue that PITR tasks might return the Information schema is out of date error when there are a large number of tables in the cluster but the actual data size is small #57743 @Tristan1900
        • Fix the issue that log backup might unexpectedly enter a paused state when the advancer owner switches #58031 @3pointer
        • Fix the issue that the tiup br restore command skips checking whether the table already exists in the target cluster during database-level or table-level restoration, which might overwrite existing tables #58168 @RidRisR
      • TiCDC

        • Fix the issue that the Kafka messages lack Key fields when using the Debezium protocol #1799 @wk989898
        • Fix the issue that the redo module fails to properly report errors #11744 @CharlesCheung96
        • Fix the issue that TiCDC mistakenly discards DDL tasks when the schema versions of DDL tasks become non-incremental during TiDB DDL owner changes #11714 @wlwilliamx
      • TiDB Lightning

        • Fix the issue that TiDB Lightning fails to receive oversized messages sent from TiKV #56114 @fishiu
        • Fix the issue that the AUTO_INCREMENT value is set too high after importing data using the physical import mode #56814 @D3Hunter

    Contributors

    We would like to thank the following contributors from the TiDB community:
