TiDB 7.1.2 Release Notes
Release date: October 25, 2023
TiDB version: 7.1.2
Quick access: Quick start | Production deployment
Compatibility changes
- Prohibit setting `require_secure_transport` to `ON` in Security Enhanced Mode (SEM) to prevent potential connectivity issues for users #47665 @tiancaiamao
- Disable the smooth upgrade feature by default. You can enable it by sending the `/upgrade/start` and `/upgrade/finish` HTTP requests (example below) #47172 @zimulala
- Introduce the `tidb_opt_enable_hash_join` system variable to control whether the optimizer selects hash joins for tables (example below) #46695 @coderplay
- Disable periodic compaction of RocksDB by default, so that the default behavior of TiKV RocksDB is now consistent with that in versions before v6.5.0. This change prevents potential performance impact caused by a significant number of compactions after upgrading. In addition, TiKV introduces two new configuration items, `rocksdb.[defaultcf|writecf|lockcf].periodic-compaction-seconds` and `rocksdb.[defaultcf|writecf|lockcf].ttl`, enabling you to manually configure periodic compaction of RocksDB (example below) #15355 @LykxSassinator
- TiCDC introduces the `sink.csv.binary-encoding-method` configuration item to control the encoding method of binary data in the CSV protocol. The default value is `'base64'` (example below) #9373 @CharlesCheung96
- TiCDC introduces the `large-message-handle-option` configuration item. It is empty by default, which means that the changefeed fails when the message size exceeds the limit of the Kafka topic. When this configuration is set to `"handle-key-only"`, if the message exceeds the size limit, only the handle key will be sent to reduce the message size; if the reduced message still exceeds the limit, then the changefeed fails (example below) #9680 @3AceShowHand
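For reference, a minimal sketch of re-enabling the smooth upgrade feature through the new HTTP requests. The endpoint paths come from this release note; the POST method and the default status port `10080` are assumptions to verify against your deployment:

```shell
# Mark the start of the upgrade window on a TiDB node
# (status port assumed to be the default 10080).
curl -X POST http://127.0.0.1:10080/upgrade/start

# ... perform the rolling upgrade ...

# Mark the end of the upgrade window.
curl -X POST http://127.0.0.1:10080/upgrade/finish
```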
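A minimal sketch of the new `tidb_opt_enable_hash_join` variable, assuming it follows the usual SESSION/GLOBAL scoping of TiDB optimizer variables:

```sql
-- Prevent the optimizer from selecting hash joins in the current session;
-- other join algorithms (index join, merge join) remain available.
SET SESSION tidb_opt_enable_hash_join = OFF;

-- Verify the current setting.
SELECT @@SESSION.tidb_opt_enable_hash_join;
```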
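A sketch of the new TiKV configuration items for manually re-enabling periodic compaction; the duration-style values are an assumption based on TiKV's usual configuration syntax:

```toml
# tikv.toml: re-enable periodic compaction for the default column family,
# which is disabled by default as of this release.
[rocksdb.defaultcf]
periodic-compaction-seconds = "30d"  # value format assumed; check the TiKV configuration reference
ttl = "30d"
```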
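Finally, a sketch of the two new TiCDC changefeed options. The section names follow TiCDC's changefeed configuration layout and should be checked against the documentation; the two options apply to different sink types:

```toml
# changefeed.toml

# For storage sinks using the CSV protocol: control how binary data is
# encoded ('base64' is the default).
[sink.csv]
binary-encoding-method = "base64"

# For Kafka sinks: send only the handle key when a message exceeds the
# topic's size limit, instead of failing the changefeed.
[sink.kafka-config.large-message-handle]
large-message-handle-option = "handle-key-only"
```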
Behavior changes
- For transactions containing multiple changes, if the primary key or non-null unique index value is modified in the update event, TiCDC splits an event into delete and insert events and ensures that all events follow the sequence of delete events preceding insert events. For more information, see documentation.
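A simplified illustration of this splitting behavior, using a hypothetical table `t`:

```sql
-- Given a table t(id INT PRIMARY KEY, v INT) that contains the row (1, 10),
-- the following statement modifies the primary key:
UPDATE t SET id = 2 WHERE id = 1;

-- TiCDC now emits this single update event as two events, with the delete
-- event ordered first:
--   DELETE (id = 1, v = 10)
--   INSERT (id = 2, v = 10)
```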
Improvements
TiDB
- Add new optimizer hints, including `NO_MERGE_JOIN()`, `NO_INDEX_JOIN()`, `NO_INDEX_MERGE_JOIN()`, `NO_HASH_JOIN()`, and `NO_INDEX_HASH_JOIN()` (example below) #45520 @qw4990
- Add request source information related to the coprocessor #46514 @you06
- Add the `/upgrade/start` and `/upgrade/finish` APIs to mark the start and end of the upgrade status for TiDB nodes #47172 @zimulala
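A minimal sketch of the new hints in use; `t1` and `t2` are hypothetical tables:

```sql
-- Forbid a hash join between t1 and t2 for this query only; the optimizer
-- chooses among the remaining join algorithms.
SELECT /*+ NO_HASH_JOIN(t1, t2) */ *
FROM t1 JOIN t2 ON t1.id = t2.id;
```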
TiKV
- Optimize the compaction mechanism: when a Region is split, if there is no key to split, a compaction is triggered to eliminate excessive MVCC versions #15282 @SpadeA-Tang
- Eliminate LRUCache in Router objects to reduce memory usage and prevent OOM #15430 @Connor1996
- Add the `Max gap of safe-ts` and `Min safe ts region` metrics and introduce the `tikv-ctl get-region-read-progress` command to better observe and diagnose the status of resolved-ts and safe-ts (see the sketch after this list) #15082 @ekexium
- Expose some RocksDB configurations in TiKV that allow users to disable features such as TTL and periodic compaction #14873 @LykxSassinator
- Add the backoff mechanism for the PD client in the process of connection retries, which gradually increases retry intervals during error retries to reduce PD pressure #15428 @nolouch
- Avoid holding mutex when writing Titan manifest files to prevent affecting other threads #15351 @Connor1996
- Optimize memory usage of Resolver to prevent OOM #15458 @overvenus
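A sketch of the new `tikv-ctl` subcommand; the `-r` flag for the Region ID and the host address are assumptions, so check `tikv-ctl get-region-read-progress --help` for the exact flags:

```shell
# Inspect the resolved-ts and safe-ts progress of Region 14 on one TiKV node.
tikv-ctl --host 127.0.0.1:20160 get-region-read-progress -r 14
```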
TiFlash
- Add monitoring metrics for the memory usage of index data in Grafana #8050 @hongyunyan
Tools
Backup & Restore (BR)
- Enhance support for connection reuse of log backup and PITR restore tasks by setting the `MaxIdleConns` and `MaxIdleConnsPerHost` parameters in the HTTP client #46011 @Leavrth
- Reduce the CPU overhead of log backup `resolve lock` #40759 @3pointer
- Add a new restore parameter `WaitTiflashReady`. When this parameter is enabled, the restore operation will be completed after TiFlash replicas are successfully replicated (see the sketch after this list) #43828 #46302 @3pointer
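A sketch of a full restore that waits for TiFlash replicas; the flag spelling `--wait-tiflash-ready` is inferred from the `WaitTiflashReady` parameter name, and the PD address and storage URL are placeholders:

```shell
tiup br restore full \
  --pd "127.0.0.1:2379" \
  --storage "s3://backup-bucket/snapshot-2023-10-25" \
  --wait-tiflash-ready=true
```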
TiCDC
- Optimize several TiCDC monitoring metrics and alarm rules #9047 @asddongmen
- Kafka Sink supports sending only handle key data when a message is too large, avoiding changefeed failure caused by excessive message size #9680 @3AceShowHand
- Optimize the execution logic of replicating the `ADD INDEX` DDL operations to avoid blocking subsequent DML statements #9644 @sdojjy
- Refine the status message when TiCDC retries after a failure #9483 @asddongmen
Bug fixes
TiDB
- Fix the issue that `GROUP_CONCAT` cannot parse the `ORDER BY` column #41986 @AilinKid
- Fix the issue that querying the system table `INFORMATION_SCHEMA.TIKV_REGION_STATUS` returns incorrect results in some cases #45531 @Defined2014
- Fix the issue that upgrading TiDB gets stuck when reading metadata takes longer than one DDL lease #45176 @zimulala
- Fix the issue that executing DML statements with CTE can cause panic #46083 @winoros
- Fix the issue of not being able to detect data that does not comply with partition definitions during partition exchange #46492 @mjonss
- Fix the issue that the results of `MERGE_JOIN` are incorrect #46580 @qw4990
- Fix the incorrect result that occurs when comparing unsigned types with `Duration` type constants #45410 @wshwsh12
- Fix the issue that `Duplicate entry` might occur when `AUTO_ID_CACHE=1` is set #46444 @tiancaiamao
- Fix the memory leak issue when TTL is running #45510 @lcwangchao
- Fix the issue that killing a connection might cause go coroutine leaks #46034 @pingyu
- Fix the issue that an error in Index Join might cause the query to get stuck #45716 @wshwsh12
- Fix the issue that the `BatchPointGet` operator returns incorrect results for hash partitioned tables #46779 @jiyfhust
- Fix the issue that restrictions on partitioned tables remain on the original table when `EXCHANGE PARTITION` fails or is canceled #45920 #45791 @mjonss
- Fix the issue that the `TIDB_INLJ` hint does not take effect when joining two sub-queries #46160 @qw4990
- Fix the issue that the behavior is inconsistent with MySQL when comparing a `DATETIME` or `TIMESTAMP` column with a number constant #38361 @yibin87
- Fix the issue that HashCode is repeatedly calculated for deeply nested expressions, which causes high memory usage and OOM #42788 @AilinKid
- Fix the issue that access path pruning logic ignores the `READ_FROM_STORAGE(TIFLASH[...])` hint, which causes the `Can't find a proper physical plan` error #40146 @AilinKid
- Fix the issue that the `cast(col)=range` condition causes FullScan when CAST has no precision loss #45199 @AilinKid
- Fix the issue that `plan replayer dump explain` reports an error #46197 @time-and-fate
- Fix the issue that the `tmp-storage-quota` configuration does not take effect #45161 #26806 @wshwsh12
- Fix the issue that the TiDB parser remains in a state and causes parsing failure #45898 @qw4990
- Fix the issue that when Aggregation is pushed down through Union in MPP execution plans, the results are incorrect #45850 @AilinKid
- Fix the issue that TiDB recovers slowly after a panic when `AUTO_ID_CACHE=1` is set #46454 @tiancaiamao
- Fix the issue that the Sort operator might cause TiDB to crash during the spill process #47538 @windtalker
- Fix the issue of duplicate primary keys when using BR to restore non-clustered index tables with `AUTO_ID_CACHE=1` #46093 @tiancaiamao
- Fix the issue that the query might report an error when querying partitioned tables in static pruning mode and the execution plan contains `IndexLookUp` #45757 @Defined2014
- Fix the issue that inserting data into a partitioned table might fail after exchanging partitions between the partition table and a table with placement policies #45791 @mjonss
- Fix the issue of encoding time fields with incorrect timezone information #46033 @tangenta
- Fix the issue that DDL statements that fast add indexes would get stuck when the `tmp` directory does not exist #45456 @tangenta
- Fix the issue that upgrading multiple TiDB instances simultaneously might block the upgrade process #46228 @zimulala
- Fix the issue of uneven Region scattering caused by incorrect parameters used in splitting Regions #46135 @zimulala
- Fix the issue that DDL operations might get stuck after TiDB is restarted #46751 @wjhuang2016
- Prohibit split table operations on non-integer clustered indexes #47350 @tangenta
- Fix the issue that DDL operations might get permanently blocked due to incorrect MDL handling #46920 @wjhuang2016
- Fix the issue of duplicate rows in `information_schema.columns` caused by renaming a table #47064 @jiyfhust
- Fix the panic issue of `batch-client` in `client-go` #47691 @crazycs520
- Fix the issue that statistics collection on partitioned tables is not killed in time when its memory usage exceeds memory limits #45706 @hawkingrei
- Fix the issue that query results are inaccurate when queries contain `UNHEX` conditions #45378 @qw4990
- Fix the issue that TiDB returns `Can't find column` for queries with `GROUP_CONCAT` #41957 @AilinKid
TiKV
- Fix the issue that the `ttl-check-poll-interval` configuration item does not take effect on RawKV API V2 #15142 @pingyu
- Fix the data error of continuously increasing raftstore-applys #15371 @Connor1996
- Fix the issue that the QPS drops to zero in the sync-recover phase under the Data Replication Auto Synchronous mode #14975 @nolouch
- Fix the data inconsistency issue that might occur when one TiKV node is isolated and another node is restarted #15035 @overvenus
- Fix the issue that Online Unsafe Recovery cannot handle merge abort #15580 @v01dstar
- Fix the issue that network interruption between PD and TiKV might cause PITR to get stuck #15279 @YuJuncen
- Fix the issue that Region Merge might be blocked after executing `FLASHBACK` #15258 @overvenus
- Fix the issue of heartbeat storms by reducing the number of store heartbeat retries #15184 @nolouch
- Fix the issue that Online Unsafe Recovery does not abort on timeout #15346 @Connor1996
- Fix the issue that encryption might cause data corruption during partial write #15080 @tabokie
- Fix the TiKV panic issue caused by incorrect metadata of Region #13311 @cfzjywxk
- Fix the issue that requests of the TiDB Lightning checksum coprocessor time out when there is online workload #15565 @lance6716
- Fix the issue that moving a peer might cause the performance of the Follower Read to deteriorate #15468 @YuJuncen
PD
- Fix the issue that hot Regions might not be scheduled in the v2 scheduler algorithm #6645 @lhy1024
- Fix the issue that the TLS handshake might cause high CPU usage in an empty cluster #6913 @nolouch
- Fix the issue that injection errors between PD nodes might cause PD panic #6858 @HuSharp
- Fix the issue that store information synchronization might cause the PD leader to exit and get stuck #6918 @rleungx
- Fix the issue that the Region information is not updated after Flashback #6912 @overvenus
- Fix the issue that PD might panic during exiting #7053 @HuSharp
- Fix the issue that the context timeout might cause the `lease timeout` error #6926 @rleungx
- Fix the issue that peers are not properly scattered by group, which might cause uneven distribution of leaders #6962 @rleungx
- Fix the issue that the isolation level label is not synchronized when updating using pd-ctl #7121 @rleungx
- Fix the issue that `evict-leader-scheduler` might lose configuration #6897 @HuSharp
- Fix potential security risks of the plugin directory and files #7094 @HuSharp
- Fix the issue that DDL might not guarantee atomicity after enabling resource control #45050 @glorv
- Fix the issue that unhealthy peers cannot be removed when rule checker selects peers #6559 @nolouch
- Fix the issue that when etcd is already started but the client has not yet connected to it, calling the client might cause PD to panic #6860 @HuSharp
- Fix the issue that RU consumption less than 0 causes PD to crash #6973 @CabinfeverB
- Fix the issue that the client-go regularly updating `min-resolved-ts` might cause PD OOM when the cluster is large #46664 @HuSharp
TiFlash
- Fix the issue that the memory usage reported by MemoryTracker is inaccurate #8128 @JinheLin
- Fix the issue that TiFlash data is inconsistent due to invalid range keys of a region #7762 @lidezhu
- Fix the issue that queries fail after `fsp` is changed for `DATETIME`, `TIMESTAMP`, or `TIME` data type #7809 @JaySon-Huang
- Fix the issue that when there are multiple HashAgg operators within the same MPP task, the compilation of the MPP task might take an excessively long time, severely affecting query performance #7810 @SeaRise
Tools
Backup & Restore (BR)
- Fix the issue that recovering implicit primary keys using PITR might cause conflicts #46520 @3pointer
- Fix the issue that PITR fails to recover data from GCS #47022 @Leavrth
- Fix the potential error in fine-grained backup phase in RawKV mode #37085 @pingyu
- Fix the issue that recovering meta-kv using PITR might cause errors #46578 @Leavrth
- Fix the errors in BR integration test cases #46561 @purelind
- Fix the issue of restore failures by increasing the default values of the global parameters `TableColumnCountLimit` and `IndexLimit` used by BR to their maximum values #45793 @Leavrth
- Fix the issue that the br CLI client gets stuck when scanning restored data #45476 @3pointer
- Fix the issue that PITR might skip restoring the `CREATE INDEX` DDL statement #47482 @Leavrth
- Fix the issue that running PITR multiple times within 1 minute might cause data loss #15483 @YuJuncen
TiCDC
- Fix the issue that a replication task in an abnormal state blocks upstream GC #9543 @CharlesCheung96
- Fix the issue that replicating data to an object storage might cause data inconsistency #9592 @CharlesCheung96
- Fix the issue that enabling `redo-resolved-ts` might cause changefeed to fail #9769 @CharlesCheung96
- Fix the issue that fetching wrong memory information might cause OOM issues in some operating systems #9762 @sdojjy
- Fix the issue of uneven distribution of write keys among nodes when `scale-out` is enabled #9665 @sdojjy
- Fix the issue that sensitive user information is recorded in the logs #9690 @sdojjy
- Fix the issue that TiCDC might incorrectly synchronize rename DDL operations #9488 #9378 #9531 @asddongmen
- Fix the issue that upstream TiDB GC is blocked after all changefeeds are removed #9633 @sdojjy
- Fix the issue that TiCDC replication tasks might fail in some corner cases #9685 #9697 #9695 #9736 @hicqu @CharlesCheung96
- Fix the issue of high TiCDC replication latency caused by network isolation of PD nodes #9565 @asddongmen
- Fix the issue that TiCDC accesses the invalid old address during PD scaling up and down #9584 @fubinzh @asddongmen
- Fix the issue that TiCDC cannot recover quickly from TiKV node failures when there are a lot of Regions upstream #9741 @sdojjy
- Fix the issue that TiCDC incorrectly changes the `UPDATE` operation to `INSERT` when using the CSV format #9658 @3AceShowHand
- Fix the issue that a replication error occurs when multiple tables are renamed in the same DDL statement on the upstream #9476 #9488 @CharlesCheung96 @asddongmen
- Fix the issue that the replication task fails due to short retry intervals when synchronizing to Kafka #9504 @3AceShowHand
- Fix the issue that replication write conflicts might occur when the unique keys for multiple rows are modified in one transaction on the upstream #9430 @sdojjy
- Fix the issue that the replication task might get stuck when the downstream encounters a short-term failure #9542 #9272 #9582 #9592 @hicqu
- Fix the issue that the replication task might get stuck when the downstream encounters an error and retries #9450 @hicqu
TiDB Data Migration (DM)
- Fix the issue that replication lag returned by DM keeps growing when a failed DDL is skipped and no subsequent DDLs are executed #9605 @D3Hunter
- Fix the issue that DM cannot handle conflicts correctly with case-insensitive collations #9489 @hihihuhu
- Fix the DM validator deadlock issue and enhance retries #9257 @D3Hunter
- Fix the issue that DM skips all DMLs when resuming a task in optimistic mode #9588 @GMHDBJD
- Fix the issue that DM cannot properly track upstream table schemas when skipping online DDLs #9587 @GMHDBJD
- Fix the issue that DM skips partition DDLs in optimistic mode #9788 @GMHDBJD
TiDB Lightning
- Fix the issue that when importing a table with `AUTO_ID_CACHE=1`, a wrong `row_id` is assigned #46100 @D3Hunter
- Fix the issue that the data type is wrong when saving `NEXT_GLOBAL_ROW_ID` #45427 @lyzx2001
- Fix the issue that checksum still reports errors when `checksum = "optional"` #45382 @lyzx2001
- Fix the issue that data import fails when the PD cluster address changes #43436 @lichunzhu
- Fix the issue that TiDB Lightning fails to start when PD topology is changed #46688 @lance6716
- Fix the issue that route might panic when importing CSV data #43284 @lyzx2001