
Tune Region Performance

This document introduces how to tune Region performance by adjusting the Region size, and how to use buckets to optimize concurrent queries when the Region size is large. In addition, it introduces how to enhance PD's ability to provide Region information to TiDB nodes by enabling the Active PD Follower feature.

Overview

TiKV automatically shards the underlying data. Data is split into multiple Regions based on key ranges. When the size of a Region exceeds a threshold, TiKV splits it into two or more Regions.

In scenarios with large datasets, a relatively small Region size can leave TiKV with too many Regions, which increases resource consumption and degrades performance. Since v6.1.0, TiDB supports customizing the Region size. The default Region size is 96 MiB. To reduce the number of Regions, you can increase the Region size.

To reduce the performance overhead of many Regions, you can also enable Hibernate Region or Region Merge.
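As a rough sketch, assuming your cluster allows modifying these items online through TiDB's SET CONFIG statement (otherwise, change them in the PD and TiKV configuration files), the related settings look like the following; the item names and values are illustrative and should be verified against the configuration reference for your version:

```sql
-- Region Merge is driven by PD scheduling configuration: adjacent Regions whose
-- size and key count stay below these thresholds become merge candidates.
SET CONFIG pd `schedule.max-merge-region-size` = 20;       -- unit: MiB
SET CONFIG pd `schedule.max-merge-region-keys` = 200000;

-- Hibernate Region is controlled by the TiKV configuration item
-- raftstore.hibernate-regions (a boolean, enabled by default in recent
-- versions); set it in the TiKV configuration file if you need to change it.
```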

Use region-split-size to adjust Region size

To adjust the Region size, you can use the coprocessor.region-split-size configuration item. When TiFlash is used, the Region size should not exceed 256 MiB.
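For example, the following is a minimal sketch of raising the Region size to 256 MiB through TiDB's SET CONFIG statement, assuming that your TiKV version supports modifying these coprocessor items online (otherwise, set them under [coprocessor] in the TiKV configuration file). The companion item coprocessor.region-max-size is not mentioned above; it is included here only as the conventional split-check threshold of roughly 1.5 times region-split-size:

```sql
-- Raise the split threshold so that TiKV creates fewer, larger Regions.
SET CONFIG tikv `coprocessor.region-split-size` = '256MiB';
-- Conventionally kept at roughly 1.5x region-split-size.
SET CONFIG tikv `coprocessor.region-max-size` = '384MiB';
```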

When the Dumpling tool is used, the Region size should not exceed 1 GiB. In this case, you need to reduce the concurrency after increasing the Region size; otherwise, TiDB might run out of memory.

Use buckets to increase concurrency

After Regions are set to a larger size, if you want to further improve query concurrency, you can set coprocessor.enable-region-bucket to true. With this configuration, Regions are divided into buckets, which are smaller ranges within a Region that serve as the unit of concurrent queries, improving scan concurrency. You can control the bucket size using coprocessor.region-bucket-size.
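The following is a minimal sketch of enabling buckets with SET CONFIG, under the same assumption that these coprocessor items can be modified online in your version (otherwise, set them in the TiKV configuration file); the bucket size shown is only an illustrative value:

```sql
-- Divide each Region into smaller buckets that serve as the unit of
-- concurrent scans within a large Region.
SET CONFIG tikv `coprocessor.enable-region-bucket` = 'true';
-- Smaller buckets give higher scan concurrency at the cost of more metadata.
SET CONFIG tikv `coprocessor.region-bucket-size` = '128MiB';
```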

Use the Active PD Follower feature to enhance the scalability of PD's Region information query service

In a TiDB cluster with a large number of Regions, the PD leader might experience high CPU load due to the increased overhead of handling heartbeats and scheduling tasks. If the cluster has many TiDB instances and Region information requests arrive at high concurrency, the CPU pressure on the PD leader increases further and might cause PD services to become unavailable.

To ensure high availability, the PD leader synchronizes Region information with its followers in real time. PD followers maintain and store Region information in memory, enabling them to process Region information requests. You can enable the Active PD Follower feature by setting the system variable pd_enable_follower_handle_region to ON. After this feature is enabled, TiDB evenly distributes Region information requests to all PD servers, and PD followers can also directly handle Region requests, thereby reducing the CPU pressure on the PD leader.
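For example, to enable the feature for the whole cluster and confirm the setting:

```sql
-- Enable the Active PD Follower feature cluster-wide so that TiDB distributes
-- Region information requests across PD followers as well as the leader.
SET GLOBAL pd_enable_follower_handle_region = ON;

-- Confirm the current value.
SHOW VARIABLES LIKE 'pd_enable_follower_handle_region';
```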

PD ensures that the Region information in TiDB is always up-to-date by maintaining the status of Region synchronization streams and using the fallback mechanism of TiKV client-go.

  • When the network between the PD leader and a follower is unstable or a follower is unavailable, the Region synchronization stream is disconnected, and the PD follower rejects Region information requests. In this case, TiDB automatically retries the request to the PD leader and temporarily marks the follower as unavailable.
  • When the network is stable, because there might be a delay in the synchronization between the leader and the follower, some Region information obtained from the follower might be outdated. In this case, if the KV request corresponding to the Region fails, TiDB automatically re-requests the latest Region information from the PD leader and sends the KV request to TiKV again.
