TiDB 7.1.3 Release Notes
Release date: December 21, 2023
TiDB version: 7.1.3
Quick access: Quick start | Production deployment
Compatibility changes
- After further testing, the default value of the TiCDC Changefeed configuration item `case-sensitive` is changed from `true` to `false`. This means that by default, table and database names in the TiCDC configuration file are case-insensitive #10047 @sdojjy
- TiCDC Changefeed introduces the following new configuration items (see the configuration sketch after this list):
    - `sql-mode`: enables you to set the SQL mode used by TiCDC to parse DDL statements when TiCDC replicates data #9876 @asddongmen
    - `encoding-worker-num` and `flush-worker-num`: enable you to set different concurrency parameters for the redo module based on the specifications of different machines #10048 @CharlesCheung96
    - `compression`: enables you to configure the compression behavior of redo log files #10176 @sdojjy
    - `sink.cloud-storage-config`: enables you to set the automatic cleanup of historical data when replicating data to object storage #10109 @CharlesCheung96
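As a rough, non-authoritative sketch of where these items sit in a changefeed configuration file (the values and the exact section placement below are illustrative assumptions, not recommended settings):

```toml
# Illustrative TiCDC changefeed configuration sketch; values are examples only.

# Table and database names in filter rules are now matched case-insensitively by default.
case-sensitive = false

# SQL mode TiCDC uses when parsing DDL statements during replication (example value).
sql-mode = "ANSI_QUOTES"

[consistent]
# Redo module concurrency, tuned to the machine specification (example values).
encoding-worker-num = 16
flush-worker-num = 8
# Compression behavior of redo log files.
compression = "lz4"

[sink.cloud-storage-config]
# Automatic cleanup of historical data replicated to object storage is configured
# in this table; refer to the TiCDC documentation for the exact field names.
```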
Improvements
TiDB
- Support the `FLASHBACK CLUSTER TO TSO` syntax (see the example below) #48372 @BornChanger
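A minimal, hedged example of the new syntax; the TSO value is a placeholder, and the statement is subject to the usual `FLASHBACK CLUSTER` prerequisites:

```sql
-- Illustrative only: flash the whole cluster back to a past state identified by a TSO.
-- 445494839813079040 is a placeholder; use a real TSO from the point in time you want to restore.
FLASHBACK CLUSTER TO TSO 445494839813079040;
```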
PD
Tools
Backup & Restore (BR)
- Enable automatic retry of Region scatter during snapshot recovery when encountering timeout failures or cancellations of Region scatter #47236 @Leavrth
- When restoring a snapshot backup, BR retries if it encounters certain network errors #48528 @Leavrth
- Introduce a new integration test for Point-In-Time Recovery (PITR) in the `delete range` scenario, enhancing PITR stability #47738 @Leavrth
TiCDC
- Optimize the memory consumption when TiCDC nodes replicate data to TiDB #9935 @3AceShowHand
- Optimize some alarm rules #9266 @asddongmen
- Optimize the performance of the redo log, including writing data to S3 in parallel and adopting the lz4 compression algorithm #10176 #10226 @sdojjy
- Improve the performance of TiCDC replicating data to object storage by increasing parallelism #10098 @CharlesCheung96
- Reduce the impact of TiCDC incremental scanning on upstream TiKV #11390 @hicqu
- Support making TiCDC Canal-JSON content format compatible with the content format of the official Canal output by setting `content-compatible=true` in the `sink-uri` configuration (see the example below) #10106 @3AceShowHand
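As a hedged illustration (the TiCDC server address, Kafka broker, topic, and changefeed ID below are placeholders), the parameter is appended to the sink URI when creating a changefeed:

```shell
# Illustrative only: create a changefeed whose Canal-JSON output follows the
# official Canal content format. Addresses and names are placeholders.
cdc cli changefeed create \
  --server="http://127.0.0.1:8300" \
  --changefeed-id="canal-compatible-task" \
  --sink-uri="kafka://127.0.0.1:9092/canal-topic?protocol=canal-json&content-compatible=true"
```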
TiDB Lightning
Bug fixes
TiDB
- Fix the issue that queries containing common table expressions (CTEs) unexpectedly get stuck when the memory limit is exceeded #49096 @AilinKid
- Fix the issue that high CPU usage of TiDB occurs due to long-term memory pressure caused by `tidb_server_memory_limit` #48741 @XuHuaiyu
- Fix the issue that queries containing CTEs report `runtime error: index out of range [32] with length 32` when `tidb_max_chunk_size` is set to a small value #48808 @guo-shaoge
- Fix the issue that the query result is incorrect when an `ENUM` type column is used as the join key #48991 @winoros
- Fix the parsing error caused by aggregate or window functions in recursive CTEs #47711 @elsa0520
- Fix the issue that `UPDATE` statements might be incorrectly converted to PointGet #47445 @hi-rustin
- Fix the OOM issue that might occur when TiDB performs garbage collection on the `stats_history` table #48431 @hawkingrei
- Fix the issue that the same query plan has different `PLAN_DIGEST` values in some cases #47634 @King-Dylan
- Fix the issue that `GenJSONTableFromStats` cannot be killed when it consumes a large amount of memory #47779 @hawkingrei
- Fix the issue that the result might be incorrect when predicates are pushed down to common table expressions #47881 @winoros
- Fix the issue that `Duplicate entry` might occur when `AUTO_ID_CACHE=1` is set #46444 @tiancaiamao
- Fix the issue that TiDB server might consume a significant amount of resources when the enterprise plugin for audit logging is used #49273 @lcwangchao
- Fix the issue that TiDB server might panic during graceful shutdown #36793 @bb7133
- Fix the issue that tables with `AUTO_ID_CACHE=1` might lead to gRPC client leaks when there are a large number of tables #48869 @tiancaiamao
- Fix the incorrect error message for `ErrLoadDataInvalidURI` (invalid S3 URI error) #48164 @lance6716
- Fix the issue that executing `ALTER TABLE ... LAST PARTITION` fails when the partition column type is `DATETIME` #48814 @crazycs520
- Fix the issue that the actual error message during `IMPORT INTO` execution might be overridden by other error messages #47992 #47781 @D3Hunter
- Fix the issue that TiDB deployed in the cgroup v2 container cannot be detected #48342 @D3Hunter
- Fix the issue that executing `UNION ALL` with the DUAL table as the first subnode might cause an error #48755 @winoros
- Fix the TiDB node panic issue that occurs when DDL `jobID` is restored to 0 #46296 @jiyfhust
- Fix the issue of unsorted row data returned by `TABLESAMPLE` #48253 @tangenta
- Fix the issue that panic might occur when `tidb_enable_ordered_result_mode` is enabled #45044 @qw4990
- Fix the issue that the optimizer mistakenly selects IndexFullScan to reduce sort introduced by window functions #46177 @qw4990
- Fix the issue of not handling locks in the MVCC interface when reading schema diff commit versions from the TiDB schema cache #48281 @cfzjywxk
- Fix the issue of incorrect memory usage estimation in `INDEX_LOOKUP_HASH_JOIN` #47788 @SeaRise
- Fix the issue of `IMPORT INTO` task failure caused by PD leader malfunction for 1 minute #48307 @D3Hunter
- Fix the panic issue of `batch-client` in `client-go` #47691 @crazycs520
- Fix the issue that column pruning can cause panic in specific situations #47331 @hi-rustin
- Fix the issue that TiDB does not read `cgroup` resource limits when it is started with `systemd` #47442 @hawkingrei
- Fix the issue of possible syntax error when a common table expression (CTE) containing aggregate or window functions is referenced by other recursive CTEs #47603 #47711 @elsa0520
- Fix the panic issue that might occur when constructing TopN structure for statistics #35948 @hi-rustin
- Fix the issue that the result of `COUNT(INT)` calculated by MPP might be incorrect #48643 @AilinKid
- Fix the issue that the chunk cannot be reused when the HashJoin operator performs probe #48082 @wshwsh12
TiKV
- Fix the issue that if TiKV runs extremely slowly, it might panic after Region merge #16111 @overvenus
- Fix the issue that Resolved TS might be blocked for two hours #15520 #39130 @overvenus
- Fix the issue that TiKV reports the `ServerIsBusy` error because it cannot append the Raft log #15800 @tonyxuqqi
- Fix the issue that snapshot restore might get stuck when BR crashes #15684 @YuJuncen
- Fix the issue that Resolved TS in stale read might cause TiKV OOM issues when tracking large transactions #14864 @overvenus
- Fix the issue that damaged SST files might be spread to other TiKV nodes #15986 @Connor1996
- Fix the issue that the joint state of DR Auto-Sync might time out when scaling out #15817 @Connor1996
- Fix the issue that the scheduler command variables are incorrect in Grafana on the cloud environment #15832 @Connor1996
- Fix the issue that stale peers are retained and block resolved-ts after Regions are merged #15919 @overvenus
- Fix the issue that Online Unsafe Recovery cannot handle merge abort #15580 @v01dstar
- Fix the TiKV OOM issue that occurs when restarting TiKV and there are a large number of Raft logs that are not applied #15770 @overvenus
- Fix security issues by upgrading the version of `lz4-sys` to 1.9.4 #15621 @SpadeA-Tang
- Fix the issue that `blob-run-mode` in Titan cannot be updated online #15978 @tonyxuqqi
- Fix the issue that network interruption between PD and TiKV might cause PITR to get stuck #15279 @YuJuncen
- Fix the issue that TiKV coprocessor might return stale data when removing a Raft peer #16069 @overvenus
PD
- Fix the issue that the `resource_manager_resource_unit` metric is empty in TiDB Dashboard when executing `CALIBRATE RESOURCE` #45166 @CabinfeverB
- Fix the issue that the Calibrate by Workload page reports an error #48162 @CabinfeverB
- Fix the issue that deleting a resource group can damage DDL atomicity #45050 @glorv
- Fix the issue that when the PD leader is transferred and there is a network partition between the new leader and the PD client, the PD client fails to update the leader information #7416 @CabinfeverB
- Fix the issue that adding multiple TiKV nodes to a large cluster might cause TiKV heartbeat reporting to become slow or stuck #7248 @rleungx
- Fix the issue that TiDB Dashboard cannot read PD `trace` data correctly #7253 @nolouch
- Fix some security issues by upgrading the version of Gin Web Framework from v1.8.1 to v1.9.1 #7438 @niubell
- Fix the issue that the rule checker does not add Learners according to the configuration of Placement Rules #7185 @nolouch
- Fix the issue that PD might delete normal Peers when TiKV nodes are unavailable #7249 @lhy1024
- Fix the issue that it takes a long time to switch the leader in DR Auto-Sync mode #6988 @HuSharp
TiFlash
- Fix the issue that executing the `ALTER TABLE ... EXCHANGE PARTITION ...` statement causes panic #8372 @JaySon-Huang
- Fix the issue of memory leak when TiFlash encounters memory limitation during query #8447 @JinheLin
- Fix the issue that data of TiFlash replicas would still be garbage collected after executing `FLASHBACK DATABASE` #8450 @JaySon-Huang
- Fix incorrect display of maximum percentile time for some panels in Grafana #8076 @JaySon-Huang
- Fix the issue that a query returns the unexpected error message "Block schema mismatch in FineGrainedShuffleWriter-V1" #8111 @SeaRise
Tools
Backup & Restore (BR)
- Fix the issue that the default values for BR SQL commands and CLI are different, which might cause OOM issues #48000 @YuJuncen
- Fix the issue that the log backup might get stuck in some scenarios when backing up large wide tables #15714 @YuJuncen
- Fix the issue that BR generates incorrect URIs for external storage files #48452 @3AceShowHand
- Fix the issue that the retry after an EC2 metadata connection reset causes degraded backup and restore performance #47650 @Leavrth
- Fix the issue that the log backup task can start but does not work properly if failing to connect to PD during task initialization #16056 @YuJuncen
TiCDC
- Fix the issue that the `WHERE` clause does not use the primary key as a condition when replicating `DELETE` statements in certain scenarios #9812 @asddongmen
- Fix the issue that replication tasks get stuck in certain special scenarios when replicating data to object storage #10041 #10044 @CharlesCheung96
- Fix the issue that replication tasks get stuck in certain special scenarios after enabling sync-point and redo log #10091 @CharlesCheung96
- Fix the issue that TiCDC mistakenly closes the connection with TiKV in certain special scenarios #10239 @hicqu
- Fix the issue that the changefeed cannot replicate DML events in bidirectional replication mode if the target table is dropped and then recreated in upstream #10079 @asddongmen
- Fix the performance issue caused by accessing NFS directories when replicating data to an object store sink #10041 @CharlesCheung96
- Fix the issue that the TiCDC server might panic when replicating data to an object storage service #10137 @sdojjy
- Fix the issue that the interval between replicating DDL statements is too long when redo log is enabled #9960 @CharlesCheung96
- Fix the issue that an owner node gets stuck due to NFS failure when the redo log is enabled #9886 @3AceShowHand
TiDB Data Migration (DM)
TiDB Lightning
- Fix the issue that data import fails due to the PD leader being killed or slow processing of PD requests #46950 #48075 @D3Hunter
- Fix the issue that TiDB Lightning gets stuck during `writeToTiKV` #46321 #48352 @lance6716
- Fix the issue that data import fails because HTTP retry requests do not use the current request content #47930 @lance6716
- Remove unnecessary `get_regions` calls in physical import mode #45507 @mittalrishabh