Determine Your TiDB Size

This document describes how to determine the size of a Dedicated Tier cluster.

Note

A Developer Tier cluster comes with a default cluster size, which cannot be changed.

Size TiDB

TiDB is for computing only and does not store data. It is horizontally scalable.

You can configure both node size and node quantity for TiDB.

TiDB node size

The supported node sizes include the following:

  • 4 vCPU, 16 GiB (Beta)
  • 8 vCPU, 16 GiB
  • 16 vCPU, 32 GiB
Note

If the node size of TiDB is set to 4 vCPU, 16 GiB (Beta), note the following restrictions:

  • The node quantity of TiDB can only be set to 1 or 2, and the node quantity of TiKV is fixed at 3.
  • TiDB can only be used with 4 vCPU TiKV.
  • TiFlash is unavailable.

TiDB node quantity

For high availability, it is recommended that you configure at least two TiDB nodes for each TiDB Cloud cluster.

For more information about how to determine the TiDB size, see Performance reference.

Size TiKV

TiKV is responsible for storing data. It is horizontally scalable.

You can configure node size, node quantity, and storage size for TiKV.

TiKV node size

The supported node sizes include the following:

  • 4 vCPU, 16 GiB (Beta)
  • 8 vCPU, 64 GiB
  • 16 vCPU, 64 GiB
Note

If the node size of TiKV is set to 4 vCPU, 16 GiB (Beta), note the following restrictions:

  • The node quantity of TiDB can only be set to 1 or 2, and the node quantity of TiKV is fixed at 3.
  • TiKV can only be used with 4 vCPU TiDB.
  • TiFlash is unavailable.

TiKV node quantity

The number of TiKV nodes should be at least one set (3 nodes distributed across 3 different availability zones).

TiDB Cloud deploys TiKV nodes evenly to all availability zones (at least 3) in the region you select to achieve durability and high availability. In a typical 3-replica setup, your data is distributed evenly among the TiKV nodes across all availability zones and is persisted to the disk of each TiKV node.

Note

When you scale your TiDB cluster, nodes in the 3 availability zones are increased or decreased at the same time. For how to scale in or scale out a TiDB cluster based on your needs, see Scale Your TiDB Cluster.

Minimum number of TiKV nodes: ceil(compressed size of your data ÷ the storage capacity of one TiKV node) × the number of replicas

Suppose the size of your MySQL dump files is 5 TB and the TiDB compression ratio is 70%. The storage needed is then 5 TB × 1024 GB/TB × 70% = 3584 GB.

For example, if you configure the storage size of each TiKV node on AWS as 1024 GB and use the default 3 data replicas, the required number of TiKV nodes is as follows:

Minimum number of TiKV nodes: ceil(3584 ÷ 1024) × 3 = 4 × 3 = 12
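
If it helps to automate this estimate, the following is a minimal sketch in Python. The function name and parameters are illustrative and not part of any TiDB Cloud tooling; it simply evaluates the formula above, assumes the default 3 replicas, and reproduces the 12-node result from the example.

    import math

    def min_tikv_nodes(dump_size_gb, compression_ratio, node_storage_gb, replicas=3):
        """Estimate the minimum TiKV node count using the formula above.

        dump_size_gb      : size of the source data (for example, MySQL dump files), in GB
        compression_ratio : estimated TiDB compression ratio (0.7 means 70%)
        node_storage_gb   : storage size configured for each TiKV node, in GB
        replicas          : number of data replicas (3 in a typical TiDB Cloud setup)
        """
        compressed_gb = dump_size_gb * compression_ratio
        return math.ceil(compressed_gb / node_storage_gb) * replicas

    # Example from this document: 5 TB of dump files, 70% compression, 1024 GB per TiKV node
    print(min_tikv_nodes(5 * 1024, 0.7, 1024))  # 3584 GB compressed -> ceil(3.5) * 3 = 12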

For more information about how to determine the TiKV size, see Performance reference.

TiKV storage size

  • 8 vCPU or 16 vCPU TiKV supports up to 4 TiB storage capacity.
  • 4 vCPU TiKV supports up to 2 TiB storage capacity.
Note

You cannot decrease the TiKV storage size after cluster creation.

Size TiFlash

TiFlash synchronizes data from TiKV in real time and supports real-time analytics workloads right out of the box. It is horizontally scalable.

You can configure node size, node quantity, and storage size for TiFlash.

TiFlash node size

The supported node sizes include the following:

  • 8 vCPU, 64 GiB
  • 16 vCPU, 128 GiB

Note that TiFlash is unavailable when the node size of TiDB or TiKV is set to 4 vCPU, 16 GiB (Beta).

TiFlash node quantity

TiDB Cloud deploys TiFlash nodes evenly to different availability zones in a region. It is recommended that you configure at least two TiFlash nodes in each TiDB Cloud cluster and create at least two replicas of the data for high availability in your production environment.

The minimum number of TiFlash nodes depends on the TiFlash replica counts for specific tables:

Minimum number of TiFlash nodes: min((compressed size of table A * replicas for table A + compressed size of table B * replicas for table B) / the storage capacity of each TiFlash node, max(replicas for table A, replicas for table B)), rounded up

For example, if you configure the storage size of each TiFlash node on AWS as 1024 GB, and set 2 replicas for table A (the compressed size is 800 GB) and 1 replica for table B (the compressed size is 100 GB), then the required number of TiFlash nodes is as follows:

Minimum number of TiFlash nodes: min((800 GB * 2 + 100 GB * 1) / 1024 GB, max(2, 1)) = min(1.66, 2), which rounds up to 2
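
If it helps to see this arithmetic spelled out, the following is a minimal sketch in Python that evaluates the formula above exactly as written. The function name and parameters are illustrative and not part of any TiDB Cloud tooling; it reproduces the 2-node result from the example.

    import math

    def min_tiflash_nodes(tables, node_storage_gb):
        """Evaluate the TiFlash node-count formula above as written.

        tables          : list of (compressed_size_gb, replica_count) pairs, one per table
        node_storage_gb : storage size configured for each TiFlash node, in GB
        """
        total_replica_gb = sum(size * replicas for size, replicas in tables)
        max_replicas = max(replicas for _, replicas in tables)
        return math.ceil(min(total_replica_gb / node_storage_gb, max_replicas))

    # Example from this document: table A is 800 GB with 2 replicas, table B is 100 GB with 1 replica
    print(min_tiflash_nodes([(800, 2), (100, 1)], 1024))  # min(1700 / 1024, 2) = 1.66 -> rounds up to 2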

TiFlash storage size

TiFlash supports up to 2 TiB storage capacity.

Note

You cannot decrease the TiFlash storage size after cluster creation.

Performance reference

This section provides TPC-C and Sysbench performance test results for five popular TiDB cluster scales, which you can use as a reference when determining your cluster size.

Test environment:

  • TiDB version: v5.4.0
  • Warehouses: 5,000
  • Data size: 366 GB
  • Table size: 10,000,000
  • Table count: 16

The performance data for each of the following scales is listed below.

TiDB: 4 vCPU * 2; TiKV: 4 vCPU * 3
  • Optimal performance with low latency

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 300 | 14,532 | 13,137 | 608

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 300 | 8,848 | 8,848 | 36
    Point Select | 600 | 46,224 | 46,224 | 13
    Read Write | 150 | 719 | 14,385 | 209
    Update Index | 150 | 4,346 | 4,346 | 35
    Update Non-index | 600 | 13,603 | 13,603 | 44
  • Maximum TPS and QPS

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 1,200 | 15,208 | 13,748 | 2,321

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 1,500 | 11,601 | 11,601 | 129
    Point Select | 600 | 46,224 | 46,224 | 13
    Read Write | 150 | 14,385 | 719 | 209
    Update Index | 1,200 | 6,526 | 6,526 | 184
    Update Non-index | 1,500 | 14,351 | 14,351 | 105
TiDB: 8 vCPU * 2; TiKV: 8 vCPU * 3
  • Optimal performance with low latency

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 600 | 32,266 | 29,168 | 548

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 600 | 17,831 | 17,831 | 34
    Point Select | 600 | 93,287 | 93,287 | 6
    Read Write | 300 | 29,729 | 1,486 | 202
    Update Index | 300 | 9,415 | 9,415 | 32
    Update Non-index | 1,200 | 31,092 | 31,092 | 39
  • Maximum TPS and QPS

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 1,200 | 33,394 | 30,188 | 1,048

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 2,000 | 23,633 | 23,633 | 84
    Point Select | 600 | 93,287 | 93,287 | 6
    Read Write | 600 | 30,464 | 1,523 | 394
    Update Index | 2,000 | 15,146 | 15,146 | 132
    Update Non-index | 2,000 | 34,505 | 34,505 | 58
TiDB: 8 vCPU * 4; TiKV: 8 vCPU * 6
  • Optimal performance with low latency

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 1,200 | 62,918 | 56,878 | 310

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 1,200 | 33,892 | 33,892 | 23
    Point Select | 1,200 | 185,574 | 181,255 | 4
    Read Write | 600 | 59,160 | 2,958 | 127
    Update Index | 600 | 18,735 | 18,735 | 21
    Update Non-index | 2,400 | 60,629 | 60,629 | 23
  • Maximum TPS and QPS

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 2,400 | 65,452 | 59,169 | 570

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 4,000 | 47,029 | 47,029 | 43
    Point Select | 1,200 | 185,574 | 181,255 | 4
    Read Write | 1,200 | 60,624 | 3,030 | 197
    Update Index | 4,000 | 30,140 | 30,140 | 67
    Update Non-index | 4,000 | 68,664 | 68,664 | 29
TiDB: 16 vCPU * 2; TiKV: 16 vCPU * 3
  • Optimal performance with low latency

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 1,200 | 67,941 | 61,419 | 540

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 1,200 | 35,096 | 35,096 | 34
    Point Select | 1,200 | 228,600 | 228,600 | 5
    Read Write | 600 | 73,150 | 3,658 | 164
    Update Index | 600 | 18,886 | 18,886 | 32
    Update Non-index | 2,000 | 63,837 | 63,837 | 31
  • Maximum TPS and QPS

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 1,200 | 67,941 | 61,419 | 540

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 2,000 | 43,338 | 43,338 | 46
    Point Select | 1,200 | 228,600 | 228,600 | 5
    Read Write | 1,200 | 73,631 | 3,682 | 326
    Update Index | 3,000 | 29,576 | 29,576 | 101
    Update Non-index | 3,000 | 64,624 | 64,624 | 46
TiDB: 16 vCPU * 4; TiKV: 16 vCPU * 6
  • Optimal performance with low latency

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 2,400 | 133,164 | 120,380 | 305

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 2,400 | 69,139 | 69,139 | 22
    Point Select | 2,400 | 448,056 | 448,056 | 4
    Read Write | 1,200 | 145,568 | 7,310 | 97
    Update Index | 1,200 | 36,638 | 36,638 | 20
    Update Non-index | 4,000 | 125,129 | 125,129 | 17
  • Maximum TPS and QPS

    TPC-C performance:

    Transaction model | Threads | tpmC | QPS | Latency (ms)
    TPCC | 2,400 | 133,164 | 120,380 | 305

    Sysbench OLTP performance:

    Transaction model | Threads | TPS | QPS | Latency (ms)
    Insert | 4,000 | 86,242 | 86,242 | 25
    Point Select | 2,400 | 448,056 | 448,056 | 4
    Read Write | 2,400 | 146,526 | 7,326 | 172
    Update Index | 6,000 | 58,856 | 58,856 | 51
    Update Non-index | 6,000 | 128,601 | 128,601 | 24